Test Report: Docker_Windows 14079

                    
798c4e8fed290cfa318a9fb994a7c6f555db39c1:2022-06-01:24216

Failed tests (149/220)

Order  Failed test  Duration (s)
20 TestOffline 91.77
22 TestAddons/Setup 74.8
23 TestCertOptions 97.58
24 TestCertExpiration 385.63
25 TestDockerFlags 96.68
26 TestForceSystemdFlag 94.43
27 TestForceSystemdEnv 94.51
32 TestErrorSpam/setup 73.66
41 TestFunctional/serial/StartWithProxy 78.72
42 TestFunctional/serial/AuditLog 0
43 TestFunctional/serial/SoftStart 114.04
44 TestFunctional/serial/KubeContext 4.25
45 TestFunctional/serial/KubectlGetPods 4.17
52 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 3.11
53 TestFunctional/serial/CacheCmd/cache/cache_reload 12.24
55 TestFunctional/serial/MinikubeKubectlCmd 5.9
56 TestFunctional/serial/MinikubeKubectlCmdDirectly 5.88
57 TestFunctional/serial/ExtraConfig 113.83
58 TestFunctional/serial/ComponentHealth 4.16
59 TestFunctional/serial/LogsCmd 3.55
60 TestFunctional/serial/LogsFileCmd 4.47
66 TestFunctional/parallel/StatusCmd 13.35
69 TestFunctional/parallel/ServiceCmd 5.42
70 TestFunctional/parallel/ServiceCmdConnect 5.52
72 TestFunctional/parallel/PersistentVolumeClaim 4.21
74 TestFunctional/parallel/SSHCmd 11.05
75 TestFunctional/parallel/CpCmd 13.27
76 TestFunctional/parallel/MySQL 4.6
77 TestFunctional/parallel/FileSync 7.63
78 TestFunctional/parallel/CertSync 24.4
82 TestFunctional/parallel/NodeLabels 4.62
84 TestFunctional/parallel/NonActiveRuntimeDisabled 3.49
87 TestFunctional/parallel/Version/components 3.31
88 TestFunctional/parallel/DockerEnv/powershell 9.42
89 TestFunctional/parallel/UpdateContextCmd/no_changes 3.33
90 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 3.24
91 TestFunctional/parallel/UpdateContextCmd/no_clusters 3.33
95 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
101 TestFunctional/parallel/ImageCommands/ImageListShort 3
102 TestFunctional/parallel/ImageCommands/ImageListTable 2.91
103 TestFunctional/parallel/ImageCommands/ImageListJson 3.03
104 TestFunctional/parallel/ImageCommands/ImageListYaml 2.99
105 TestFunctional/parallel/ImageCommands/ImageBuild 9.18
106 TestFunctional/parallel/ImageCommands/Setup 2.19
107 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.49
108 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.3
110 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.17
111 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.11
115 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.34
116 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.15
122 TestIngressAddonLegacy/StartLegacyK8sCluster 77.66
124 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 7
126 TestIngressAddonLegacy/serial/ValidateIngressAddons 3.93
129 TestJSONOutput/start/Command 74.06
130 TestJSONOutput/start/Audit 0
132 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
133 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0.01
135 TestJSONOutput/pause/Command 3.1
136 TestJSONOutput/pause/Audit 0
141 TestJSONOutput/unpause/Command 3.07
142 TestJSONOutput/unpause/Audit 0
147 TestJSONOutput/stop/Command 22.06
148 TestJSONOutput/stop/Audit 0
150 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0.01
154 TestKicCustomNetwork/create_custom_network 241.25
156 TestKicExistingNetwork 4.12
157 TestKicCustomSubnet 236.19
159 TestMinikubeProfile 94.94
162 TestMountStart/serial/StartWithMountFirst 78.58
165 TestMultiNode/serial/FreshStart2Nodes 78.19
166 TestMultiNode/serial/DeployApp2Nodes 17.11
167 TestMultiNode/serial/PingHostFrom2Pods 5.81
168 TestMultiNode/serial/AddNode 6.91
169 TestMultiNode/serial/ProfileList 7.67
170 TestMultiNode/serial/CopyFile 6.65
171 TestMultiNode/serial/StopNode 10.01
172 TestMultiNode/serial/StartAfterStop 8.47
173 TestMultiNode/serial/RestartKeepsNodes 136.93
174 TestMultiNode/serial/DeleteNode 9.79
175 TestMultiNode/serial/StopMultiNode 31.73
176 TestMultiNode/serial/RestartMultiNode 115.17
177 TestMultiNode/serial/ValidateNameConflict 164.32
181 TestPreload 86.98
182 TestScheduledStopWindows 86.11
186 TestInsufficientStorage 29.35
187 TestRunningBinaryUpgrade 343.66
189 TestKubernetesUpgrade 113.49
190 TestMissingContainerUpgrade 376.06
201 TestNoKubernetes/serial/StartWithK8s 83.49
202 TestStoppedBinaryUpgrade/Upgrade 356.8
203 TestNoKubernetes/serial/StartWithStopK8s 117.37
204 TestNoKubernetes/serial/Start 102.73
205 TestStoppedBinaryUpgrade/MinikubeLogs 3.39
218 TestPause/serial/Start 82.13
220 TestStartStop/group/old-k8s-version/serial/FirstStart 81.34
222 TestStartStop/group/no-preload/serial/FirstStart 81.37
224 TestStartStop/group/embed-certs/serial/FirstStart 81.28
225 TestStartStop/group/old-k8s-version/serial/DeployApp 8.42
226 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 7.08
227 TestStartStop/group/old-k8s-version/serial/Stop 26.58
228 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 10.12
229 TestStartStop/group/no-preload/serial/DeployApp 8.67
230 TestStartStop/group/old-k8s-version/serial/SecondStart 118.51
231 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 7.28
232 TestStartStop/group/embed-certs/serial/DeployApp 8.33
233 TestStartStop/group/no-preload/serial/Stop 26.47
234 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 7.09
235 TestStartStop/group/embed-certs/serial/Stop 26.87
236 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 9.86
237 TestStartStop/group/no-preload/serial/SecondStart 119.12
238 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 10.33
239 TestStartStop/group/embed-certs/serial/SecondStart 118.53
240 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 4.13
241 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 4.45
242 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 7.27
243 TestStartStop/group/old-k8s-version/serial/Pause 11.61
244 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 4.08
246 TestStartStop/group/default-k8s-different-port/serial/FirstStart 82.3
247 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 4.47
249 TestStartStop/group/newest-cni/serial/FirstStart 82.09
250 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 7.48
251 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 4.28
252 TestStartStop/group/no-preload/serial/Pause 11.69
253 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 4.37
254 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 7.45
255 TestStartStop/group/embed-certs/serial/Pause 11.55
256 TestNetworkPlugins/group/auto/Start 77.6
257 TestNetworkPlugins/group/kindnet/Start 77.74
258 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.58
261 TestStartStop/group/newest-cni/serial/Stop 27.02
262 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 7.32
263 TestStartStop/group/default-k8s-different-port/serial/Stop 26.95
264 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 10.19
265 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 10.1
266 TestStartStop/group/newest-cni/serial/SecondStart 122.35
267 TestNetworkPlugins/group/cilium/Start 77.96
268 TestStartStop/group/default-k8s-different-port/serial/SecondStart 121.38
269 TestNetworkPlugins/group/calico/Start 80.28
270 TestNetworkPlugins/group/false/Start 81.68
271 TestNetworkPlugins/group/bridge/Start 79.18
274 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 7.57
275 TestStartStop/group/newest-cni/serial/Pause 11.85
276 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 4.26
277 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 4.55
278 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 7.44
279 TestStartStop/group/default-k8s-different-port/serial/Pause 11.67
280 TestNetworkPlugins/group/enable-default-cni/Start 77.56
281 TestNetworkPlugins/group/kubenet/Start 77.52
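The failure table above is regular enough to process mechanically, e.g. to surface the slowest failures for triage. A minimal sketch, assuming the three-column layout shown above (order number, test name, duration in seconds):

```python
# Parse rows of the form "<order> <test-name> <seconds>" from a minikube
# test report and rank the failures by duration.

def parse_failures(lines):
    """Return (test_name, duration_seconds) tuples from table rows."""
    failures = []
    for line in lines:
        parts = line.split()
        # A valid row has exactly three whitespace-separated fields,
        # the first of which is the numeric order column.
        if len(parts) == 3 and parts[0].isdigit():
            failures.append((parts[1], float(parts[2])))
    return failures

def slowest(lines, n=3):
    """The n failures with the longest duration, slowest first."""
    return sorted(parse_failures(lines), key=lambda t: t[1], reverse=True)[:n]

if __name__ == "__main__":
    sample = [
        "20 TestOffline 91.77",
        "24 TestCertExpiration 385.63",
        "190 TestMissingContainerUpgrade 376.06",
    ]
    for name, secs in slowest(sample):
        print(f"{name}: {secs:.2f}s")
```

Test names containing `/` (subtests such as `TestFunctional/serial/SoftStart`) parse as a single token, so no special handling is needed.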
TestOffline (91.77s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20220601111410-9404 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p offline-docker-20220601111410-9404 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: exit status 60 (1m18.82943s)

-- stdout --
	* [offline-docker-20220601111410-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node offline-docker-20220601111410-9404 in cluster offline-docker-20220601111410-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-20220601111410-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
-- /stdout --
** stderr ** 
	I0601 11:14:10.361636    4204 out.go:296] Setting OutFile to fd 968 ...
	I0601 11:14:10.441428    4204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:14:10.441428    4204 out.go:309] Setting ErrFile to fd 632...
	I0601 11:14:10.441428    4204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:14:10.449426    4204 out.go:303] Setting JSON to false
	I0601 11:14:10.468412    4204 start.go:115] hostinfo: {"hostname":"minikube2","uptime":13985,"bootTime":1654068065,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:14:10.468412    4204 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:14:10.475339    4204 out.go:177] * [offline-docker-20220601111410-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:14:10.480592    4204 notify.go:193] Checking for updates...
	I0601 11:14:10.483213    4204 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:14:10.489063    4204 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:14:10.495764    4204 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:14:10.501519    4204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:14:10.509246    4204 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:14:10.509301    4204 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:14:13.233055    4204 docker.go:137] docker version: linux-20.10.14
	I0601 11:14:13.239066    4204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:14:15.350952    4204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.111765s)
	I0601 11:14:15.352702    4204 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:14:14.2821704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:14:15.358824    4204 out.go:177] * Using the docker driver based on user configuration
	I0601 11:14:15.361792    4204 start.go:284] selected driver: docker
	I0601 11:14:15.361792    4204 start.go:806] validating driver "docker" against <nil>
	I0601 11:14:15.362192    4204 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:14:15.429688    4204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:14:17.641919    4204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2115114s)
	I0601 11:14:17.641919    4204 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:14:16.5418472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:14:17.641919    4204 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:14:17.642958    4204 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:14:17.645928    4204 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:14:17.647932    4204 cni.go:95] Creating CNI manager for ""
	I0601 11:14:17.648270    4204 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:14:17.648305    4204 start_flags.go:306] config:
	{Name:offline-docker-20220601111410-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:offline-docker-20220601111410-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:14:17.651167    4204 out.go:177] * Starting control plane node offline-docker-20220601111410-9404 in cluster offline-docker-20220601111410-9404
	I0601 11:14:17.654087    4204 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:14:17.657221    4204 out.go:177] * Pulling base image ...
	I0601 11:14:17.659208    4204 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:14:17.659208    4204 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:14:17.659208    4204 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:14:17.659208    4204 cache.go:57] Caching tarball of preloaded images
	I0601 11:14:17.659208    4204 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:14:17.660208    4204 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:14:17.660208    4204 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\offline-docker-20220601111410-9404\config.json ...
	I0601 11:14:17.660208    4204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\offline-docker-20220601111410-9404\config.json: {Name:mk7b0de824ef63cd7ae3529b4ebb5119a5224636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:14:18.770426    4204 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:14:18.770524    4204 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:14:18.770566    4204 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:14:18.770566    4204 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:14:18.770566    4204 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:14:18.770566    4204 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:14:18.771225    4204 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:14:18.771225    4204 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:14:18.771319    4204 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:14:21.535774    4204 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:14:21.535963    4204 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:14:21.536126    4204 start.go:352] acquiring machines lock for offline-docker-20220601111410-9404: {Name:mk5407c980da0b627ff2541485e447c1d1a28c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:14:21.536390    4204 start.go:356] acquired machines lock for "offline-docker-20220601111410-9404" in 264.3µs
	I0601 11:14:21.536685    4204 start.go:91] Provisioning new machine with config: &{Name:offline-docker-20220601111410-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:offline-docker-20220601111410-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:14:21.536923    4204 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:14:21.543991    4204 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:14:21.543991    4204 start.go:165] libmachine.API.Create for "offline-docker-20220601111410-9404" (driver="docker")
	I0601 11:14:21.544577    4204 client.go:168] LocalClient.Create starting
	I0601 11:14:21.545236    4204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:14:21.545326    4204 main.go:134] libmachine: Decoding PEM data...
	I0601 11:14:21.545326    4204 main.go:134] libmachine: Parsing certificate...
	I0601 11:14:21.545326    4204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:14:21.545893    4204 main.go:134] libmachine: Decoding PEM data...
	I0601 11:14:21.545958    4204 main.go:134] libmachine: Parsing certificate...
	I0601 11:14:21.557642    4204 cli_runner.go:164] Run: docker network inspect offline-docker-20220601111410-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:14:22.648561    4204 cli_runner.go:211] docker network inspect offline-docker-20220601111410-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:14:22.648561    4204 cli_runner.go:217] Completed: docker network inspect offline-docker-20220601111410-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0909067s)
	I0601 11:14:22.655577    4204 network_create.go:272] running [docker network inspect offline-docker-20220601111410-9404] to gather additional debugging logs...
	I0601 11:14:22.655577    4204 cli_runner.go:164] Run: docker network inspect offline-docker-20220601111410-9404
	W0601 11:14:23.755953    4204 cli_runner.go:211] docker network inspect offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:14:23.755953    4204 cli_runner.go:217] Completed: docker network inspect offline-docker-20220601111410-9404: (1.1003636s)
	I0601 11:14:23.755953    4204 network_create.go:275] error running [docker network inspect offline-docker-20220601111410-9404]: docker network inspect offline-docker-20220601111410-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20220601111410-9404
	I0601 11:14:23.755953    4204 network_create.go:277] output of [docker network inspect offline-docker-20220601111410-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20220601111410-9404
	
	** /stderr **
	I0601 11:14:23.762609    4204 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:14:25.105361    4204 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3427373s)
	I0601 11:14:25.125711    4204 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000590330] misses:0}
	I0601 11:14:25.125810    4204 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:14:25.125810    4204 network_create.go:115] attempt to create docker network offline-docker-20220601111410-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:14:25.132565    4204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220601111410-9404
	W0601 11:14:26.647961    4204 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:14:26.647961    4204 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220601111410-9404: (1.5153787s)
	E0601 11:14:26.647961    4204 network_create.go:104] error while trying to create docker network offline-docker-20220601111410-9404 192.168.49.0/24: create docker network offline-docker-20220601111410-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6e139ab5ad8293589ed0b12c2fd68a08e045655daf71f790cf6bf78add4f41d2 (br-6e139ab5ad82): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:14:26.648793    4204 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220601111410-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6e139ab5ad8293589ed0b12c2fd68a08e045655daf71f790cf6bf78add4f41d2 (br-6e139ab5ad82): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220601111410-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6e139ab5ad8293589ed0b12c2fd68a08e045655daf71f790cf6bf78add4f41d2 (br-6e139ab5ad82): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:14:26.665947    4204 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:14:27.793392    4204 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1274318s)
	I0601 11:14:27.800398    4204 cli_runner.go:164] Run: docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:14:28.929138    4204 cli_runner.go:211] docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:14:28.929138    4204 cli_runner.go:217] Completed: docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1287276s)
	I0601 11:14:28.929138    4204 client.go:171] LocalClient.Create took 7.3844781s
	I0601 11:14:30.954828    4204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:14:31.192443    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404
	W0601 11:14:32.263047    4204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:14:32.263095    4204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: (1.070539s)
	I0601 11:14:32.263366    4204 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:32.555299    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404
	W0601 11:14:33.669375    4204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:14:33.669546    4204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: (1.1138823s)
	W0601 11:14:33.669753    4204 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	
	W0601 11:14:33.669821    4204 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:33.679557    4204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:14:33.687636    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404
	W0601 11:14:34.788102    4204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:14:34.788241    4204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: (1.100453s)
	I0601 11:14:34.788551    4204 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:35.093668    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404
	W0601 11:14:36.198533    4204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:14:36.198585    4204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: (1.1046809s)
	W0601 11:14:36.198750    4204 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	
	W0601 11:14:36.198811    4204 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:36.198811    4204 start.go:134] duration metric: createHost completed in 14.6616906s
	I0601 11:14:36.198811    4204 start.go:81] releasing machines lock for "offline-docker-20220601111410-9404", held for 14.6622563s
	W0601 11:14:36.199074    4204 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for offline-docker-20220601111410-9404 container: docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220601111410-9404': mkdir /var/lib/docker/volumes/offline-docker-20220601111410-9404: read-only file system
	I0601 11:14:36.214174    4204 cli_runner.go:164] Run: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}
	W0601 11:14:37.284490    4204 cli_runner.go:211] docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:14:37.284592    4204 cli_runner.go:217] Completed: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: (1.0703042s)
	I0601 11:14:37.284668    4204 delete.go:82] Unable to get host status for offline-docker-20220601111410-9404, assuming it has already been deleted: state: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	W0601 11:14:37.285023    4204 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for offline-docker-20220601111410-9404 container: docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220601111410-9404': mkdir /var/lib/docker/volumes/offline-docker-20220601111410-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for offline-docker-20220601111410-9404 container: docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220601111410-9404': mkdir /var/lib/docker/volumes/offline-docker-20220601111410-9404: read-only file system
	
	I0601 11:14:37.285023    4204 start.go:614] Will try again in 5 seconds ...
	I0601 11:14:42.295210    4204 start.go:352] acquiring machines lock for offline-docker-20220601111410-9404: {Name:mk5407c980da0b627ff2541485e447c1d1a28c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:14:42.295491    4204 start.go:356] acquired machines lock for "offline-docker-20220601111410-9404" in 280.7µs
	I0601 11:14:42.295679    4204 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:14:42.295773    4204 fix.go:55] fixHost starting: 
	I0601 11:14:42.310191    4204 cli_runner.go:164] Run: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}
	W0601 11:14:43.366388    4204 cli_runner.go:211] docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:14:43.366388    4204 cli_runner.go:217] Completed: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: (1.0561843s)
	I0601 11:14:43.366388    4204 fix.go:103] recreateIfNeeded on offline-docker-20220601111410-9404: state= err=unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:43.366388    4204 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:14:43.372387    4204 out.go:177] * docker "offline-docker-20220601111410-9404" container is missing, will recreate.
	I0601 11:14:43.374387    4204 delete.go:124] DEMOLISHING offline-docker-20220601111410-9404 ...
	I0601 11:14:43.387766    4204 cli_runner.go:164] Run: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}
	W0601 11:14:44.445666    4204 cli_runner.go:211] docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:14:44.445747    4204 cli_runner.go:217] Completed: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: (1.0578427s)
	W0601 11:14:44.445928    4204 stop.go:75] unable to get state: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:44.445986    4204 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:44.460772    4204 cli_runner.go:164] Run: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}
	W0601 11:14:45.524101    4204 cli_runner.go:211] docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:14:45.524152    4204 cli_runner.go:217] Completed: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: (1.0626385s)
	I0601 11:14:45.524343    4204 delete.go:82] Unable to get host status for offline-docker-20220601111410-9404, assuming it has already been deleted: state: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:45.532375    4204 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-20220601111410-9404
	W0601 11:14:46.616153    4204 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:14:46.616153    4204 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} offline-docker-20220601111410-9404: (1.0837025s)
	I0601 11:14:46.616153    4204 kic.go:356] could not find the container offline-docker-20220601111410-9404 to remove it. will try anyways
	I0601 11:14:46.624585    4204 cli_runner.go:164] Run: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}
	W0601 11:14:47.710626    4204 cli_runner.go:211] docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:14:47.710626    4204 cli_runner.go:217] Completed: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: (1.0859755s)
	W0601 11:14:47.710763    4204 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:47.721692    4204 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-20220601111410-9404 /bin/bash -c "sudo init 0"
	W0601 11:14:48.821096    4204 cli_runner.go:211] docker exec --privileged -t offline-docker-20220601111410-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:14:48.821208    4204 cli_runner.go:217] Completed: docker exec --privileged -t offline-docker-20220601111410-9404 /bin/bash -c "sudo init 0": (1.0983155s)
	I0601 11:14:48.821407    4204 oci.go:625] error shutdown offline-docker-20220601111410-9404: docker exec --privileged -t offline-docker-20220601111410-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:49.841813    4204 cli_runner.go:164] Run: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}
	W0601 11:14:50.872377    4204 cli_runner.go:211] docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:14:50.872377    4204 cli_runner.go:217] Completed: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: (1.0304696s)
	I0601 11:14:50.872377    4204 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:50.872377    4204 oci.go:639] temporary error: container offline-docker-20220601111410-9404 status is  but expect it to be exited
	I0601 11:14:50.872377    4204 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:51.356329    4204 cli_runner.go:164] Run: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}
	W0601 11:14:52.421418    4204 cli_runner.go:211] docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:14:52.421446    4204 cli_runner.go:217] Completed: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: (1.0649217s)
	I0601 11:14:52.421554    4204 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:52.421674    4204 oci.go:639] temporary error: container offline-docker-20220601111410-9404 status is  but expect it to be exited
	I0601 11:14:52.421772    4204 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:53.327881    4204 cli_runner.go:164] Run: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}
	W0601 11:14:54.404018    4204 cli_runner.go:211] docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:14:54.404070    4204 cli_runner.go:217] Completed: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: (1.0759082s)
	I0601 11:14:54.404283    4204 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:54.404283    4204 oci.go:639] temporary error: container offline-docker-20220601111410-9404 status is  but expect it to be exited
	I0601 11:14:54.404283    4204 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:55.059347    4204 cli_runner.go:164] Run: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}
	W0601 11:14:56.164651    4204 cli_runner.go:211] docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:14:56.164651    4204 cli_runner.go:217] Completed: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: (1.1050774s)
	I0601 11:14:56.164897    4204 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:56.164897    4204 oci.go:639] temporary error: container offline-docker-20220601111410-9404 status is  but expect it to be exited
	I0601 11:14:56.164996    4204 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:57.294824    4204 cli_runner.go:164] Run: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}
	W0601 11:14:58.386885    4204 cli_runner.go:211] docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:14:58.386938    4204 cli_runner.go:217] Completed: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: (1.0919667s)
	I0601 11:14:58.386938    4204 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:58.386938    4204 oci.go:639] temporary error: container offline-docker-20220601111410-9404 status is  but expect it to be exited
	I0601 11:14:58.386938    4204 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:14:59.922423    4204 cli_runner.go:164] Run: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}
	W0601 11:15:00.984916    4204 cli_runner.go:211] docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:15:00.984916    4204 cli_runner.go:217] Completed: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: (1.0624802s)
	I0601 11:15:00.984916    4204 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:15:00.984916    4204 oci.go:639] temporary error: container offline-docker-20220601111410-9404 status is  but expect it to be exited
	I0601 11:15:00.984916    4204 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:15:04.038757    4204 cli_runner.go:164] Run: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}
	W0601 11:15:05.088873    4204 cli_runner.go:211] docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:15:05.088873    4204 cli_runner.go:217] Completed: docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: (1.0501042s)
	I0601 11:15:05.088873    4204 oci.go:637] temporary error verifying shutdown: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:15:05.088873    4204 oci.go:639] temporary error: container offline-docker-20220601111410-9404 status is  but expect it to be exited
	I0601 11:15:05.088873    4204 oci.go:88] couldn't shut down offline-docker-20220601111410-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	 
	I0601 11:15:05.095842    4204 cli_runner.go:164] Run: docker rm -f -v offline-docker-20220601111410-9404
	I0601 11:15:06.192150    4204 cli_runner.go:217] Completed: docker rm -f -v offline-docker-20220601111410-9404: (1.0962958s)
	I0601 11:15:06.199180    4204 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-20220601111410-9404
	W0601 11:15:07.281599    4204 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:15:07.281671    4204 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} offline-docker-20220601111410-9404: (1.0822468s)
	I0601 11:15:07.288944    4204 cli_runner.go:164] Run: docker network inspect offline-docker-20220601111410-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:15:08.412497    4204 cli_runner.go:211] docker network inspect offline-docker-20220601111410-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:15:08.412497    4204 cli_runner.go:217] Completed: docker network inspect offline-docker-20220601111410-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1235405s)
	I0601 11:15:08.418493    4204 network_create.go:272] running [docker network inspect offline-docker-20220601111410-9404] to gather additional debugging logs...
	I0601 11:15:08.418493    4204 cli_runner.go:164] Run: docker network inspect offline-docker-20220601111410-9404
	W0601 11:15:09.506447    4204 cli_runner.go:211] docker network inspect offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:15:09.506447    4204 cli_runner.go:217] Completed: docker network inspect offline-docker-20220601111410-9404: (1.0879416s)
	I0601 11:15:09.506447    4204 network_create.go:275] error running [docker network inspect offline-docker-20220601111410-9404]: docker network inspect offline-docker-20220601111410-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20220601111410-9404
	I0601 11:15:09.506447    4204 network_create.go:277] output of [docker network inspect offline-docker-20220601111410-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20220601111410-9404
	
	** /stderr **
	W0601 11:15:09.507454    4204 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:15:09.507454    4204 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:15:10.514142    4204 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:15:10.518290    4204 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:15:10.518662    4204 start.go:165] libmachine.API.Create for "offline-docker-20220601111410-9404" (driver="docker")
	I0601 11:15:10.518770    4204 client.go:168] LocalClient.Create starting
	I0601 11:15:10.519260    4204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:15:10.519713    4204 main.go:134] libmachine: Decoding PEM data...
	I0601 11:15:10.519835    4204 main.go:134] libmachine: Parsing certificate...
	I0601 11:15:10.520294    4204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:15:10.520399    4204 main.go:134] libmachine: Decoding PEM data...
	I0601 11:15:10.520399    4204 main.go:134] libmachine: Parsing certificate...
	I0601 11:15:10.528855    4204 cli_runner.go:164] Run: docker network inspect offline-docker-20220601111410-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:15:11.587235    4204 cli_runner.go:211] docker network inspect offline-docker-20220601111410-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:15:11.587438    4204 cli_runner.go:217] Completed: docker network inspect offline-docker-20220601111410-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0581847s)
	I0601 11:15:11.593936    4204 network_create.go:272] running [docker network inspect offline-docker-20220601111410-9404] to gather additional debugging logs...
	I0601 11:15:11.593936    4204 cli_runner.go:164] Run: docker network inspect offline-docker-20220601111410-9404
	W0601 11:15:12.695731    4204 cli_runner.go:211] docker network inspect offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:15:12.695731    4204 cli_runner.go:217] Completed: docker network inspect offline-docker-20220601111410-9404: (1.1017823s)
	I0601 11:15:12.695731    4204 network_create.go:275] error running [docker network inspect offline-docker-20220601111410-9404]: docker network inspect offline-docker-20220601111410-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20220601111410-9404
	I0601 11:15:12.695731    4204 network_create.go:277] output of [docker network inspect offline-docker-20220601111410-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20220601111410-9404
	
	** /stderr **
	I0601 11:15:12.704086    4204 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:15:13.838212    4204 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1341126s)
	I0601 11:15:13.854178    4204 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590330] amended:false}} dirty:map[] misses:0}
	I0601 11:15:13.854178    4204 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:15:13.869206    4204 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000590330] amended:true}} dirty:map[192.168.49.0:0xc000590330 192.168.58.0:0xc0001acfe0] misses:0}
	I0601 11:15:13.869206    4204 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:15:13.869206    4204 network_create.go:115] attempt to create docker network offline-docker-20220601111410-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:15:13.880826    4204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220601111410-9404
	W0601 11:15:14.976778    4204 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:15:14.976778    4204 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220601111410-9404: (1.0959395s)
	E0601 11:15:14.976778    4204 network_create.go:104] error while trying to create docker network offline-docker-20220601111410-9404 192.168.58.0/24: create docker network offline-docker-20220601111410-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8f1de4c45e0ee8be1e2577dc2b4ac130b04cb8875a181237ec2a6286e789f89e (br-8f1de4c45e0e): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:15:14.976778    4204 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220601111410-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8f1de4c45e0ee8be1e2577dc2b4ac130b04cb8875a181237ec2a6286e789f89e (br-8f1de4c45e0e): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220601111410-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8f1de4c45e0ee8be1e2577dc2b4ac130b04cb8875a181237ec2a6286e789f89e (br-8f1de4c45e0e): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:15:14.990166    4204 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:15:16.093439    4204 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1030973s)
	I0601 11:15:16.100965    4204 cli_runner.go:164] Run: docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:15:17.166616    4204 cli_runner.go:211] docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:15:17.166616    4204 cli_runner.go:217] Completed: docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0656396s)
	I0601 11:15:17.166616    4204 client.go:171] LocalClient.Create took 6.6477715s
	I0601 11:15:19.189935    4204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:15:19.196886    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404
	W0601 11:15:20.267743    4204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:15:20.267743    4204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: (1.0708445s)
	I0601 11:15:20.267743    4204 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:15:20.612209    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404
	W0601 11:15:21.649875    4204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:15:21.649875    4204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: (1.0376544s)
	W0601 11:15:21.649875    4204 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	
	W0601 11:15:21.649875    4204 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:15:21.659877    4204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:15:21.665905    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404
	W0601 11:15:22.694387    4204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:15:22.694387    4204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: (1.0284704s)
	I0601 11:15:22.694387    4204 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:15:22.922856    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404
	W0601 11:15:24.027462    4204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:15:24.027462    4204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: (1.1045939s)
	W0601 11:15:24.027462    4204 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	
	W0601 11:15:24.027462    4204 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:15:24.027462    4204 start.go:134] duration metric: createHost completed in 13.5129834s
	I0601 11:15:24.036472    4204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:15:24.042463    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404
	W0601 11:15:25.121550    4204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:15:25.121599    4204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: (1.0788887s)
	I0601 11:15:25.121731    4204 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:15:25.379706    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404
	W0601 11:15:26.443051    4204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:15:26.443051    4204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: (1.0628034s)
	W0601 11:15:26.443051    4204 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	
	W0601 11:15:26.443051    4204 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:15:26.452530    4204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:15:26.459094    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404
	W0601 11:15:27.527581    4204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:15:27.527669    4204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: (1.0683776s)
	I0601 11:15:27.527837    4204 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:15:27.733757    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404
	W0601 11:15:28.870771    4204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404 returned with exit code 1
	I0601 11:15:28.870771    4204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: (1.136803s)
	W0601 11:15:28.870771    4204 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	
	W0601 11:15:28.870771    4204 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220601111410-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220601111410-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404
	I0601 11:15:28.870771    4204 fix.go:57] fixHost completed within 46.574476s
	I0601 11:15:28.870771    4204 start.go:81] releasing machines lock for "offline-docker-20220601111410-9404", held for 46.5747583s
	W0601 11:15:28.871388    4204 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-20220601111410-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220601111410-9404 container: docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220601111410-9404': mkdir /var/lib/docker/volumes/offline-docker-20220601111410-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p offline-docker-20220601111410-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220601111410-9404 container: docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220601111410-9404': mkdir /var/lib/docker/volumes/offline-docker-20220601111410-9404: read-only file system
	
	I0601 11:15:28.877130    4204 out.go:177] 
	W0601 11:15:28.881010    4204 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220601111410-9404 container: docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220601111410-9404': mkdir /var/lib/docker/volumes/offline-docker-20220601111410-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220601111410-9404 container: docker volume create offline-docker-20220601111410-9404 --label name.minikube.sigs.k8s.io=offline-docker-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220601111410-9404': mkdir /var/lib/docker/volumes/offline-docker-20220601111410-9404: read-only file system
	
	W0601 11:15:28.881010    4204 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:15:28.881010    4204 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:15:28.883891    4204 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-windows-amd64.exe start -p offline-docker-20220601111410-9404 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker failed: exit status 60
panic.go:482: *** TestOffline FAILED at 2022-06-01 11:15:29.0252662 +0000 GMT m=+3139.289491101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-20220601111410-9404

=== CONT  TestOffline
helpers_test.go:231: (dbg) Non-zero exit: docker inspect offline-docker-20220601111410-9404: exit status 1 (1.120811s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: offline-docker-20220601111410-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-20220601111410-9404 -n offline-docker-20220601111410-9404

=== CONT  TestOffline
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-20220601111410-9404 -n offline-docker-20220601111410-9404: exit status 7 (2.8795931s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:15:33.001993   10072 status.go:247] status error: host: state: unknown state "offline-docker-20220601111410-9404": docker container inspect offline-docker-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220601111410-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-20220601111410-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-20220601111410-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20220601111410-9404

=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20220601111410-9404: (8.8185628s)
--- FAIL: TestOffline (91.77s)
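Note on the `networks have overlapping IPv4` failure above: `docker network create` rejects a subnet that intersects any existing bridge network's IPAM range. The check is plain CIDR intersection, which can be sketched with Python's stdlib `ipaddress`. The requested subnet (192.168.58.0/24) and the reserved 192.168.49.0/24 come from the log; the log does not show the subnet of the conflicting bridge `br-50298ec25928`, so the second entry below is a hypothetical stale network assumed to occupy 192.168.58.0/24 for illustration:

```python
import ipaddress

# Subnet minikube tried to reserve for the new cluster network (from the log).
requested = ipaddress.ip_network("192.168.58.0/24")

# Bridge networks already present on the daemon. The first is the reserved
# subnet seen in the log; the second is a hypothetical stale bridge assumed
# to explain the "overlapping IPv4" rejection.
existing = [
    ipaddress.ip_network("192.168.49.0/24"),
    ipaddress.ip_network("192.168.58.0/24"),
]

# Any overlap means `docker network create --subnet=...` will fail.
conflicts = [net for net in existing if net.overlaps(requested)]
print(conflicts)  # -> [IPv4Network('192.168.58.0/24')]
```

Under this assumption, deleting the stale bridge (`docker network rm`) or restarting Docker Desktop, as the report suggests, frees the range so minikube's subnet scan can succeed.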

TestAddons/Setup (74.8s)

=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20220601102510-9404 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p addons-20220601102510-9404 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 60 (1m14.6989052s)

-- stdout --
	* [addons-20220601102510-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node addons-20220601102510-9404 in cluster addons-20220601102510-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "addons-20220601102510-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 10:25:10.622602    1408 out.go:296] Setting OutFile to fd 516 ...
	I0601 10:25:10.679240    1408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:25:10.679240    1408 out.go:309] Setting ErrFile to fd 644...
	I0601 10:25:10.679240    1408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:25:10.691783    1408 out.go:303] Setting JSON to false
	I0601 10:25:10.693350    1408 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11046,"bootTime":1654068064,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 10:25:10.693350    1408 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 10:25:10.695899    1408 out.go:177] * [addons-20220601102510-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 10:25:10.702116    1408 notify.go:193] Checking for updates...
	I0601 10:25:10.703878    1408 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 10:25:10.707086    1408 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 10:25:10.712391    1408 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:25:10.713198    1408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:25:10.715029    1408 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:25:13.247009    1408 docker.go:137] docker version: linux-20.10.14
	I0601 10:25:13.254217    1408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:25:15.237109    1408 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9828711s)
	I0601 10:25:15.238006    1408 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 10:25:14.2186468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:25:15.241384    1408 out.go:177] * Using the docker driver based on user configuration
	I0601 10:25:15.244314    1408 start.go:284] selected driver: docker
	I0601 10:25:15.244314    1408 start.go:806] validating driver "docker" against <nil>
	I0601 10:25:15.244314    1408 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:25:15.303313    1408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:25:17.307281    1408 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0036908s)
	I0601 10:25:17.307762    1408 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 10:25:16.2745793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:25:17.308791    1408 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 10:25:17.309960    1408 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 10:25:17.313478    1408 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 10:25:17.315924    1408 cni.go:95] Creating CNI manager for ""
	I0601 10:25:17.315924    1408 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 10:25:17.315924    1408 start_flags.go:306] config:
	{Name:addons-20220601102510-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:addons-20220601102510-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:25:17.318223    1408 out.go:177] * Starting control plane node addons-20220601102510-9404 in cluster addons-20220601102510-9404
	I0601 10:25:17.321258    1408 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 10:25:17.324421    1408 out.go:177] * Pulling base image ...
	I0601 10:25:17.326132    1408 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 10:25:17.326132    1408 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 10:25:17.326132    1408 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 10:25:17.326132    1408 cache.go:57] Caching tarball of preloaded images
	I0601 10:25:17.326132    1408 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 10:25:17.326132    1408 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 10:25:17.329793    1408 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-20220601102510-9404\config.json ...
	I0601 10:25:17.329793    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-20220601102510-9404\config.json: {Name:mk07275b3d4819446b1d0e01d21e3a6c8daee5b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:25:18.343645    1408 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 10:25:18.343707    1408 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:25:18.344276    1408 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:25:18.344322    1408 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 10:25:18.344523    1408 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 10:25:18.344603    1408 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 10:25:18.344647    1408 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 10:25:18.344647    1408 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 10:25:18.344647    1408 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:25:20.525935    1408 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 10:25:20.525935    1408 cache.go:206] Successfully downloaded all kic artifacts
	I0601 10:25:20.525935    1408 start.go:352] acquiring machines lock for addons-20220601102510-9404: {Name:mk23421b0e7646163c4b9b32f761702141dee33b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:25:20.526642    1408 start.go:356] acquired machines lock for "addons-20220601102510-9404" in 674.4µs
	I0601 10:25:20.526841    1408 start.go:91] Provisioning new machine with config: &{Name:addons-20220601102510-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:addons-20220601102510-9404 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 10:25:20.527099    1408 start.go:131] createHost starting for "" (driver="docker")
	I0601 10:25:20.533191    1408 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0601 10:25:20.533191    1408 start.go:165] libmachine.API.Create for "addons-20220601102510-9404" (driver="docker")
	I0601 10:25:20.533820    1408 client.go:168] LocalClient.Create starting
	I0601 10:25:20.533820    1408 main.go:134] libmachine: Creating CA: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 10:25:21.111331    1408 main.go:134] libmachine: Creating client certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 10:25:21.407230    1408 cli_runner.go:164] Run: docker network inspect addons-20220601102510-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:25:22.388845    1408 cli_runner.go:211] docker network inspect addons-20220601102510-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:25:22.396719    1408 network_create.go:272] running [docker network inspect addons-20220601102510-9404] to gather additional debugging logs...
	I0601 10:25:22.396719    1408 cli_runner.go:164] Run: docker network inspect addons-20220601102510-9404
	W0601 10:25:23.386847    1408 cli_runner.go:211] docker network inspect addons-20220601102510-9404 returned with exit code 1
	I0601 10:25:23.386999    1408 network_create.go:275] error running [docker network inspect addons-20220601102510-9404]: docker network inspect addons-20220601102510-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220601102510-9404
	I0601 10:25:23.386999    1408 network_create.go:277] output of [docker network inspect addons-20220601102510-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220601102510-9404
	
	** /stderr **
	I0601 10:25:23.394986    1408 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:25:24.415014    1408 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00060e2c0] misses:0}
	I0601 10:25:24.415014    1408 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:25:24.415014    1408 network_create.go:115] attempt to create docker network addons-20220601102510-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 10:25:24.417874    1408 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220601102510-9404
	W0601 10:25:25.531874    1408 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220601102510-9404 returned with exit code 1
	I0601 10:25:25.531990    1408 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220601102510-9404: (1.1138002s)
	E0601 10:25:25.532073    1408 network_create.go:104] error while trying to create docker network addons-20220601102510-9404 192.168.49.0/24: create docker network addons-20220601102510-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220601102510-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	W0601 10:25:25.532297    1408 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220601102510-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220601102510-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220601102510-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220601102510-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	I0601 10:25:25.545072    1408 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 10:25:26.552445    1408 cli_runner.go:164] Run: docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 10:25:27.560861    1408 cli_runner.go:211] docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 10:25:27.561026    1408 cli_runner.go:217] Completed: docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true: (1.008219s)
	I0601 10:25:27.561026    1408 client.go:171] LocalClient.Create took 7.0271292s
	I0601 10:25:29.573781    1408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:25:29.582784    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404
	W0601 10:25:30.579832    1408 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404 returned with exit code 1
	I0601 10:25:30.579899    1408 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:30.875812    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404
	W0601 10:25:31.909505    1408 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404 returned with exit code 1
	I0601 10:25:31.909598    1408 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: (1.0335028s)
	W0601 10:25:31.909681    1408 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	
	W0601 10:25:31.909681    1408 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:31.919395    1408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:25:31.928305    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404
	W0601 10:25:32.933100    1408 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404 returned with exit code 1
	I0601 10:25:32.933220    1408 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: (1.0046172s)
	I0601 10:25:32.933220    1408 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:33.240734    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404
	W0601 10:25:34.265879    1408 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404 returned with exit code 1
	I0601 10:25:34.265879    1408 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: (1.0235298s)
	W0601 10:25:34.265879    1408 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	
	W0601 10:25:34.265879    1408 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:34.265879    1408 start.go:134] duration metric: createHost completed in 13.73863s
	I0601 10:25:34.265879    1408 start.go:81] releasing machines lock for "addons-20220601102510-9404", held for 13.739076s
	W0601 10:25:34.266640    1408 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for addons-20220601102510-9404 container: docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220601102510-9404: error while creating volume root path '/var/lib/docker/volumes/addons-20220601102510-9404': mkdir /var/lib/docker/volumes/addons-20220601102510-9404: read-only file system
	I0601 10:25:34.279296    1408 cli_runner.go:164] Run: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}
	W0601 10:25:35.315252    1408 cli_runner.go:211] docker container inspect addons-20220601102510-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:25:35.315252    1408 cli_runner.go:217] Completed: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: (1.0359447s)
	I0601 10:25:35.315252    1408 delete.go:82] Unable to get host status for addons-20220601102510-9404, assuming it has already been deleted: state: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	W0601 10:25:35.315252    1408 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for addons-20220601102510-9404 container: docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220601102510-9404: error while creating volume root path '/var/lib/docker/volumes/addons-20220601102510-9404': mkdir /var/lib/docker/volumes/addons-20220601102510-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for addons-20220601102510-9404 container: docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220601102510-9404: error while creating volume root path '/var/lib/docker/volumes/addons-20220601102510-9404': mkdir /var/lib/docker/volumes/addons-20220601102510-9404: read-only file system
	
	I0601 10:25:35.315252    1408 start.go:614] Will try again in 5 seconds ...
	I0601 10:25:40.323756    1408 start.go:352] acquiring machines lock for addons-20220601102510-9404: {Name:mk23421b0e7646163c4b9b32f761702141dee33b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:25:40.324141    1408 start.go:356] acquired machines lock for "addons-20220601102510-9404" in 219µs
	I0601 10:25:40.324467    1408 start.go:94] Skipping create...Using existing machine configuration
	I0601 10:25:40.324626    1408 fix.go:55] fixHost starting: 
	I0601 10:25:40.337635    1408 cli_runner.go:164] Run: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}
	W0601 10:25:41.336289    1408 cli_runner.go:211] docker container inspect addons-20220601102510-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:25:41.336289    1408 fix.go:103] recreateIfNeeded on addons-20220601102510-9404: state= err=unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:41.336289    1408 fix.go:108] machineExists: false. err=machine does not exist
	I0601 10:25:41.340117    1408 out.go:177] * docker "addons-20220601102510-9404" container is missing, will recreate.
	I0601 10:25:41.342442    1408 delete.go:124] DEMOLISHING addons-20220601102510-9404 ...
	I0601 10:25:41.355183    1408 cli_runner.go:164] Run: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}
	W0601 10:25:42.352156    1408 cli_runner.go:211] docker container inspect addons-20220601102510-9404 --format={{.State.Status}} returned with exit code 1
	W0601 10:25:42.352156    1408 stop.go:75] unable to get state: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:42.352156    1408 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:42.369249    1408 cli_runner.go:164] Run: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}
	W0601 10:25:43.357712    1408 cli_runner.go:211] docker container inspect addons-20220601102510-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:25:43.357909    1408 delete.go:82] Unable to get host status for addons-20220601102510-9404, assuming it has already been deleted: state: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:43.364456    1408 cli_runner.go:164] Run: docker container inspect -f {{.Id}} addons-20220601102510-9404
	W0601 10:25:44.387227    1408 cli_runner.go:211] docker container inspect -f {{.Id}} addons-20220601102510-9404 returned with exit code 1
	I0601 10:25:44.387388    1408 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} addons-20220601102510-9404: (1.0225674s)
	I0601 10:25:44.387388    1408 kic.go:356] could not find the container addons-20220601102510-9404 to remove it. will try anyways
	I0601 10:25:44.394245    1408 cli_runner.go:164] Run: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}
	W0601 10:25:45.427667    1408 cli_runner.go:211] docker container inspect addons-20220601102510-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:25:45.427754    1408 cli_runner.go:217] Completed: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: (1.0332164s)
	W0601 10:25:45.427913    1408 oci.go:84] error getting container status, will try to delete anyways: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:45.435372    1408 cli_runner.go:164] Run: docker exec --privileged -t addons-20220601102510-9404 /bin/bash -c "sudo init 0"
	W0601 10:25:46.446576    1408 cli_runner.go:211] docker exec --privileged -t addons-20220601102510-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 10:25:46.446899    1408 cli_runner.go:217] Completed: docker exec --privileged -t addons-20220601102510-9404 /bin/bash -c "sudo init 0": (1.0111929s)
	I0601 10:25:46.446899    1408 oci.go:625] error shutdown addons-20220601102510-9404: docker exec --privileged -t addons-20220601102510-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:47.455506    1408 cli_runner.go:164] Run: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}
	W0601 10:25:48.482865    1408 cli_runner.go:211] docker container inspect addons-20220601102510-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:25:48.482865    1408 cli_runner.go:217] Completed: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: (1.0271969s)
	I0601 10:25:48.483000    1408 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:48.483000    1408 oci.go:639] temporary error: container addons-20220601102510-9404 status is  but expect it to be exited
	I0601 10:25:48.483000    1408 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:48.956113    1408 cli_runner.go:164] Run: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}
	W0601 10:25:49.976488    1408 cli_runner.go:211] docker container inspect addons-20220601102510-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:25:49.976673    1408 cli_runner.go:217] Completed: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: (1.0203634s)
	I0601 10:25:49.976750    1408 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:49.976750    1408 oci.go:639] temporary error: container addons-20220601102510-9404 status is  but expect it to be exited
	I0601 10:25:49.976827    1408 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:50.877425    1408 cli_runner.go:164] Run: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}
	W0601 10:25:51.905916    1408 cli_runner.go:211] docker container inspect addons-20220601102510-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:25:51.905916    1408 cli_runner.go:217] Completed: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: (1.0283719s)
	I0601 10:25:51.906165    1408 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:51.906165    1408 oci.go:639] temporary error: container addons-20220601102510-9404 status is  but expect it to be exited
	I0601 10:25:51.906236    1408 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:52.561690    1408 cli_runner.go:164] Run: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}
	W0601 10:25:53.562747    1408 cli_runner.go:211] docker container inspect addons-20220601102510-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:25:53.562975    1408 cli_runner.go:217] Completed: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: (1.001046s)
	I0601 10:25:53.563085    1408 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:53.563153    1408 oci.go:639] temporary error: container addons-20220601102510-9404 status is  but expect it to be exited
	I0601 10:25:53.563230    1408 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:54.694001    1408 cli_runner.go:164] Run: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}
	W0601 10:25:55.691030    1408 cli_runner.go:211] docker container inspect addons-20220601102510-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:25:55.691139    1408 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:55.691139    1408 oci.go:639] temporary error: container addons-20220601102510-9404 status is  but expect it to be exited
	I0601 10:25:55.691357    1408 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:57.211408    1408 cli_runner.go:164] Run: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}
	W0601 10:25:58.212916    1408 cli_runner.go:211] docker container inspect addons-20220601102510-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:25:58.212916    1408 cli_runner.go:217] Completed: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: (1.0013024s)
	I0601 10:25:58.213047    1408 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:25:58.213047    1408 oci.go:639] temporary error: container addons-20220601102510-9404 status is  but expect it to be exited
	I0601 10:25:58.213047    1408 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:26:01.273109    1408 cli_runner.go:164] Run: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}
	W0601 10:26:02.321262    1408 cli_runner.go:211] docker container inspect addons-20220601102510-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:26:02.321501    1408 cli_runner.go:217] Completed: docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: (1.0481416s)
	I0601 10:26:02.321501    1408 oci.go:637] temporary error verifying shutdown: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:26:02.321501    1408 oci.go:639] temporary error: container addons-20220601102510-9404 status is  but expect it to be exited
	I0601 10:26:02.321501    1408 oci.go:88] couldn't shut down addons-20220601102510-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "addons-20220601102510-9404": docker container inspect addons-20220601102510-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	 
	I0601 10:26:02.332240    1408 cli_runner.go:164] Run: docker rm -f -v addons-20220601102510-9404
	I0601 10:26:03.367840    1408 cli_runner.go:217] Completed: docker rm -f -v addons-20220601102510-9404: (1.0354219s)
	I0601 10:26:03.374720    1408 cli_runner.go:164] Run: docker container inspect -f {{.Id}} addons-20220601102510-9404
	W0601 10:26:04.423908    1408 cli_runner.go:211] docker container inspect -f {{.Id}} addons-20220601102510-9404 returned with exit code 1
	I0601 10:26:04.423973    1408 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} addons-20220601102510-9404: (1.048945s)
	I0601 10:26:04.432956    1408 cli_runner.go:164] Run: docker network inspect addons-20220601102510-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:26:05.445397    1408 cli_runner.go:211] docker network inspect addons-20220601102510-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:26:05.445397    1408 cli_runner.go:217] Completed: docker network inspect addons-20220601102510-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0120873s)
	I0601 10:26:05.452750    1408 network_create.go:272] running [docker network inspect addons-20220601102510-9404] to gather additional debugging logs...
	I0601 10:26:05.452750    1408 cli_runner.go:164] Run: docker network inspect addons-20220601102510-9404
	W0601 10:26:06.467169    1408 cli_runner.go:211] docker network inspect addons-20220601102510-9404 returned with exit code 1
	I0601 10:26:06.467169    1408 cli_runner.go:217] Completed: docker network inspect addons-20220601102510-9404: (1.0144079s)
	I0601 10:26:06.467169    1408 network_create.go:275] error running [docker network inspect addons-20220601102510-9404]: docker network inspect addons-20220601102510-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220601102510-9404
	I0601 10:26:06.467169    1408 network_create.go:277] output of [docker network inspect addons-20220601102510-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220601102510-9404
	
	** /stderr **
	W0601 10:26:06.468420    1408 delete.go:139] delete failed (probably ok) <nil>
	I0601 10:26:06.468420    1408 fix.go:115] Sleeping 1 second for extra luck!
	I0601 10:26:07.478041    1408 start.go:131] createHost starting for "" (driver="docker")
	I0601 10:26:07.482720    1408 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0601 10:26:07.483059    1408 start.go:165] libmachine.API.Create for "addons-20220601102510-9404" (driver="docker")
	I0601 10:26:07.483115    1408 client.go:168] LocalClient.Create starting
	I0601 10:26:07.483355    1408 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 10:26:07.484049    1408 main.go:134] libmachine: Decoding PEM data...
	I0601 10:26:07.484049    1408 main.go:134] libmachine: Parsing certificate...
	I0601 10:26:07.484120    1408 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 10:26:07.484120    1408 main.go:134] libmachine: Decoding PEM data...
	I0601 10:26:07.484120    1408 main.go:134] libmachine: Parsing certificate...
	I0601 10:26:07.493688    1408 cli_runner.go:164] Run: docker network inspect addons-20220601102510-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:26:08.524283    1408 cli_runner.go:211] docker network inspect addons-20220601102510-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:26:08.524609    1408 cli_runner.go:217] Completed: docker network inspect addons-20220601102510-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0304387s)
	I0601 10:26:08.531442    1408 network_create.go:272] running [docker network inspect addons-20220601102510-9404] to gather additional debugging logs...
	I0601 10:26:08.531442    1408 cli_runner.go:164] Run: docker network inspect addons-20220601102510-9404
	W0601 10:26:09.538752    1408 cli_runner.go:211] docker network inspect addons-20220601102510-9404 returned with exit code 1
	I0601 10:26:09.538981    1408 cli_runner.go:217] Completed: docker network inspect addons-20220601102510-9404: (1.0071165s)
	I0601 10:26:09.539001    1408 network_create.go:275] error running [docker network inspect addons-20220601102510-9404]: docker network inspect addons-20220601102510-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220601102510-9404
	I0601 10:26:09.539001    1408 network_create.go:277] output of [docker network inspect addons-20220601102510-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220601102510-9404
	
	** /stderr **
	I0601 10:26:09.546877    1408 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:26:10.542727    1408 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00060e2c0] amended:false}} dirty:map[] misses:0}
	I0601 10:26:10.542727    1408 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:26:10.558544    1408 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00060e2c0] amended:true}} dirty:map[192.168.49.0:0xc00060e2c0 192.168.58.0:0xc00038c578] misses:0}
	I0601 10:26:10.560724    1408 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:26:10.560724    1408 network_create.go:115] attempt to create docker network addons-20220601102510-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 10:26:10.568060    1408 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220601102510-9404
	W0601 10:26:11.691139    1408 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220601102510-9404 returned with exit code 1
	I0601 10:26:11.691198    1408 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220601102510-9404: (1.1230512s)
	E0601 10:26:11.691250    1408 network_create.go:104] error while trying to create docker network addons-20220601102510-9404 192.168.58.0/24: create docker network addons-20220601102510-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220601102510-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	W0601 10:26:11.691250    1408 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220601102510-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220601102510-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220601102510-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220601102510-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	I0601 10:26:11.706098    1408 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 10:26:12.723850    1408 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0176914s)
	I0601 10:26:12.731364    1408 cli_runner.go:164] Run: docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 10:26:13.755264    1408 cli_runner.go:211] docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 10:26:13.755338    1408 cli_runner.go:217] Completed: docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0237018s)
	I0601 10:26:13.755410    1408 client.go:171] LocalClient.Create took 6.2722266s
	I0601 10:26:15.766092    1408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:26:15.769219    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404
	W0601 10:26:16.792857    1408 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404 returned with exit code 1
	I0601 10:26:16.793098    1408 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: (1.0236269s)
	I0601 10:26:16.793098    1408 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:26:17.135464    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404
	W0601 10:26:18.150186    1408 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404 returned with exit code 1
	I0601 10:26:18.150353    1408 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: (1.0147109s)
	W0601 10:26:18.150429    1408 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	
	W0601 10:26:18.150550    1408 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:26:18.160035    1408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:26:18.163183    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404
	W0601 10:26:19.227086    1408 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404 returned with exit code 1
	I0601 10:26:19.227086    1408 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: (1.0637597s)
	I0601 10:26:19.227241    1408 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:26:19.464199    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404
	W0601 10:26:20.491472    1408 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404 returned with exit code 1
	I0601 10:26:20.491665    1408 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: (1.0256421s)
	W0601 10:26:20.491827    1408 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	
	W0601 10:26:20.491877    1408 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:26:20.491877    1408 start.go:134] duration metric: createHost completed in 13.0134432s
	I0601 10:26:20.503062    1408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:26:20.505236    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404
	W0601 10:26:21.511257    1408 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404 returned with exit code 1
	I0601 10:26:21.511314    1408 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: (1.00588s)
	I0601 10:26:21.511314    1408 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:26:21.770138    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404
	W0601 10:26:22.778226    1408 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404 returned with exit code 1
	I0601 10:26:22.778476    1408 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: (1.0080769s)
	W0601 10:26:22.778646    1408 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	
	W0601 10:26:22.778646    1408 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:26:22.787616    1408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:26:22.790465    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404
	W0601 10:26:23.807134    1408 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404 returned with exit code 1
	I0601 10:26:23.807134    1408 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: (1.0166581s)
	I0601 10:26:23.807134    1408 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:26:24.019147    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404
	W0601 10:26:25.044113    1408 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404 returned with exit code 1
	I0601 10:26:25.044166    1408 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: (1.0249245s)
	W0601 10:26:25.044358    1408 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	
	W0601 10:26:25.044358    1408 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220601102510-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220601102510-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220601102510-9404
	I0601 10:26:25.044358    1408 fix.go:57] fixHost completed within 44.719245s
	I0601 10:26:25.044433    1408 start.go:81] releasing machines lock for "addons-20220601102510-9404", held for 44.7196627s
	W0601 10:26:25.044644    1408 out.go:239] * Failed to start docker container. Running "minikube delete -p addons-20220601102510-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220601102510-9404 container: docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220601102510-9404: error while creating volume root path '/var/lib/docker/volumes/addons-20220601102510-9404': mkdir /var/lib/docker/volumes/addons-20220601102510-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p addons-20220601102510-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220601102510-9404 container: docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220601102510-9404: error while creating volume root path '/var/lib/docker/volumes/addons-20220601102510-9404': mkdir /var/lib/docker/volumes/addons-20220601102510-9404: read-only file system
	
	I0601 10:26:25.052333    1408 out.go:177] 
	W0601 10:26:25.057129    1408 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220601102510-9404 container: docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220601102510-9404: error while creating volume root path '/var/lib/docker/volumes/addons-20220601102510-9404': mkdir /var/lib/docker/volumes/addons-20220601102510-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220601102510-9404 container: docker volume create addons-20220601102510-9404 --label name.minikube.sigs.k8s.io=addons-20220601102510-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220601102510-9404: error while creating volume root path '/var/lib/docker/volumes/addons-20220601102510-9404': mkdir /var/lib/docker/volumes/addons-20220601102510-9404: read-only file system
	
	W0601 10:26:25.057129    1408 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 10:26:25.057129    1408 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 10:26:25.058178    1408 out.go:177] 

** /stderr **
addons_test.go:77: out/minikube-windows-amd64.exe start -p addons-20220601102510-9404 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 60
--- FAIL: TestAddons/Setup (74.80s)
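The `retry.go:31` entries in the log above ("will retry after 242.222461ms") show minikube retrying the `docker container inspect` port lookup with a short, growing delay before giving up. A minimal Python sketch of that retry-with-backoff pattern (function and variable names here are hypothetical, not minikube's):

```python
import time

def retry_with_backoff(fn, attempts=3, base_delay=0.01):
    """Call fn until it succeeds; sleep a little longer after each failure."""
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except RuntimeError as err:  # stands in for the docker inspect failure
            last_err = err
            time.sleep(base_delay * (i + 1))
    raise last_err

# Simulate a lookup that fails twice ("No such container") and then succeeds.
calls = {"count": 0}

def flaky_port_lookup():
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("Error: No such container")
    return 22  # the resolved host port for 22/tcp

print(retry_with_backoff(flaky_port_lookup))  # prints 22
```

In the failing run above the retries never succeed, because the container was never created in the first place (the read-only `/var/lib/docker/volumes` error), so every attempt sees the same "No such container".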

TestCertOptions (97.58s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions


=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20220601112212-9404 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-options-20220601112212-9404 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: exit status 60 (1m17.3012945s)

-- stdout --
	* [cert-options-20220601112212-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cert-options-20220601112212-9404 in cluster cert-options-20220601112212-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-options-20220601112212-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0601 11:22:28.282898    9816 network_create.go:104] error while trying to create docker network cert-options-20220601112212-9404 192.168.49.0/24: create docker network cert-options-20220601112212-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220601112212-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 931af6b6ca70f340d48317e7f00d107c58257706d84149dafae3decee4dd4c9f (br-931af6b6ca70): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-options-20220601112212-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220601112212-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 931af6b6ca70f340d48317e7f00d107c58257706d84149dafae3decee4dd4c9f (br-931af6b6ca70): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cert-options-20220601112212-9404 container: docker volume create cert-options-20220601112212-9404 --label name.minikube.sigs.k8s.io=cert-options-20220601112212-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-options-20220601112212-9404: error while creating volume root path '/var/lib/docker/volumes/cert-options-20220601112212-9404': mkdir /var/lib/docker/volumes/cert-options-20220601112212-9404: read-only file system
	
	E0601 11:23:16.308066    9816 network_create.go:104] error while trying to create docker network cert-options-20220601112212-9404 192.168.58.0/24: create docker network cert-options-20220601112212-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220601112212-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 34f656693798c8ecfd29f013966ca9e6029965d205368075117081a1ca34ee42 (br-34f656693798): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-options-20220601112212-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220601112212-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 34f656693798c8ecfd29f013966ca9e6029965d205368075117081a1ca34ee42 (br-34f656693798): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-options-20220601112212-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-options-20220601112212-9404 container: docker volume create cert-options-20220601112212-9404 --label name.minikube.sigs.k8s.io=cert-options-20220601112212-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-options-20220601112212-9404: error while creating volume root path '/var/lib/docker/volumes/cert-options-20220601112212-9404': mkdir /var/lib/docker/volumes/cert-options-20220601112212-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-options-20220601112212-9404 container: docker volume create cert-options-20220601112212-9404 --label name.minikube.sigs.k8s.io=cert-options-20220601112212-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-options-20220601112212-9404: error while creating volume root path '/var/lib/docker/volumes/cert-options-20220601112212-9404': mkdir /var/lib/docker/volumes/cert-options-20220601112212-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-options-20220601112212-9404 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost" : exit status 60
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20220601112212-9404 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p cert-options-20220601112212-9404 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 80 (3.2133746s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20220601112212-9404": docker container inspect cert-options-20220601112212-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220601112212-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_7b8531d53ef9e7bbc6fc851111559258d7d600b6_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-windows-amd64.exe -p cert-options-20220601112212-9404 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 80
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:82: failed to inspect container for the port get port 8555 for "cert-options-20220601112212-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-20220601112212-9404: exit status 1
stdout:


stderr:
Error: No such container: cert-options-20220601112212-9404
cert_options_test.go:85: expected to get a non-zero forwarded port but got 0
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-20220601112212-9404 -- "sudo cat /etc/kubernetes/admin.conf"

=== CONT  TestCertOptions
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p cert-options-20220601112212-9404 -- "sudo cat /etc/kubernetes/admin.conf": exit status 80 (3.2286332s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20220601112212-9404": docker container inspect cert-options-20220601112212-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220601112212-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_bf4b0acc5ddf49539e7b1dcbc83bd1916f9eb405_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-windows-amd64.exe ssh -p cert-options-20220601112212-9404 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 80
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20220601112212-9404": docker container inspect cert-options-20220601112212-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220601112212-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_bf4b0acc5ddf49539e7b1dcbc83bd1916f9eb405_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2022-06-01 11:23:37.7885034 +0000 GMT m=+3628.047181601
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-options-20220601112212-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect cert-options-20220601112212-9404: exit status 1 (1.1312574s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: cert-options-20220601112212-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20220601112212-9404 -n cert-options-20220601112212-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20220601112212-9404 -n cert-options-20220601112212-9404: exit status 7 (3.007269s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:23:41.908182    7760 status.go:247] status error: host: state: unknown state "cert-options-20220601112212-9404": docker container inspect cert-options-20220601112212-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220601112212-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-20220601112212-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-options-20220601112212-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20220601112212-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20220601112212-9404: (8.6016255s)
--- FAIL: TestCertOptions (97.58s)
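The repeated "networks have overlapping IPv4" errors in the run above occur because each candidate subnet (192.168.49.0/24, then 192.168.58.0/24) collided with a bridge network that already existed on the host. The check Docker performs amounts to a CIDR overlap comparison, which can be sketched with Python's standard `ipaddress` module (an illustration of the concept, not Docker's actual code):

```python
import ipaddress

def subnets_overlap(cidr_a: str, cidr_b: str) -> bool:
    """True if the two IPv4 CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# The subnet minikube requested was identical to an existing bridge's subnet:
print(subnets_overlap("192.168.49.0/24", "192.168.49.0/24"))  # True
# Two disjoint /24 blocks would be fine; in this run the next candidate,
# 192.168.58.0/24, happened to collide with a *different* existing bridge.
print(subnets_overlap("192.168.49.0/24", "192.168.58.0/24"))  # False
```

Removing the stale bridges (e.g. with `docker network prune` once no containers use them) typically frees the subnets so allocation can succeed on the next start.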

TestCertExpiration (385.63s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration


=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220601112128-9404 --memory=2048 --cert-expiration=3m --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-20220601112128-9404 --memory=2048 --cert-expiration=3m --driver=docker: exit status 60 (1m17.9627646s)

-- stdout --
	* [cert-expiration-20220601112128-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cert-expiration-20220601112128-9404 in cluster cert-expiration-20220601112128-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220601112128-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0601 11:21:43.806833    8248 network_create.go:104] error while trying to create docker network cert-expiration-20220601112128-9404 192.168.49.0/24: create docker network cert-expiration-20220601112128-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220601112128-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e5e008391ac303523697f3a794e006002cc7d37607765465138b9e43a206142c (br-e5e008391ac3): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220601112128-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220601112128-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e5e008391ac303523697f3a794e006002cc7d37607765465138b9e43a206142c (br-e5e008391ac3): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220601112128-9404 container: docker volume create cert-expiration-20220601112128-9404 --label name.minikube.sigs.k8s.io=cert-expiration-20220601112128-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220601112128-9404: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220601112128-9404': mkdir /var/lib/docker/volumes/cert-expiration-20220601112128-9404: read-only file system
	
	E0601 11:22:32.042621    8248 network_create.go:104] error while trying to create docker network cert-expiration-20220601112128-9404 192.168.58.0/24: create docker network cert-expiration-20220601112128-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220601112128-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0e4469ee5f2624403cffa1f8c1405a10fe84c63e9403e20c00782c487c3eb00a (br-0e4469ee5f26): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220601112128-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220601112128-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0e4469ee5f2624403cffa1f8c1405a10fe84c63e9403e20c00782c487c3eb00a (br-0e4469ee5f26): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220601112128-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220601112128-9404 container: docker volume create cert-expiration-20220601112128-9404 --label name.minikube.sigs.k8s.io=cert-expiration-20220601112128-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220601112128-9404: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220601112128-9404': mkdir /var/lib/docker/volumes/cert-expiration-20220601112128-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220601112128-9404 container: docker volume create cert-expiration-20220601112128-9404 --label name.minikube.sigs.k8s.io=cert-expiration-20220601112128-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220601112128-9404: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220601112128-9404': mkdir /var/lib/docker/volumes/cert-expiration-20220601112128-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-expiration-20220601112128-9404 --memory=2048 --cert-expiration=3m --driver=docker" : exit status 60

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220601112128-9404 --memory=2048 --cert-expiration=8760h --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-20220601112128-9404 --memory=2048 --cert-expiration=8760h --driver=docker: exit status 60 (1m54.914778s)

-- stdout --
	* [cert-expiration-20220601112128-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20220601112128-9404 in cluster cert-expiration-20220601112128-9404
	* Pulling base image ...
	* docker "cert-expiration-20220601112128-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220601112128-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0601 11:26:33.778091    2476 network_create.go:104] error while trying to create docker network cert-expiration-20220601112128-9404 192.168.49.0/24: create docker network cert-expiration-20220601112128-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220601112128-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a225ba49e4ee53ce8401f5268a7b9f91101f3744c3fa39a7c76eead6a59498a0 (br-a225ba49e4ee): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220601112128-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220601112128-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a225ba49e4ee53ce8401f5268a7b9f91101f3744c3fa39a7c76eead6a59498a0 (br-a225ba49e4ee): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220601112128-9404 container: docker volume create cert-expiration-20220601112128-9404 --label name.minikube.sigs.k8s.io=cert-expiration-20220601112128-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220601112128-9404: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220601112128-9404': mkdir /var/lib/docker/volumes/cert-expiration-20220601112128-9404: read-only file system
	
	E0601 11:27:26.778054    2476 network_create.go:104] error while trying to create docker network cert-expiration-20220601112128-9404 192.168.58.0/24: create docker network cert-expiration-20220601112128-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220601112128-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 017e338b641fa65b654d56d41dd700485e63f730ebd54c32aa880ab7ad00f973 (br-017e338b641f): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220601112128-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220601112128-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 017e338b641fa65b654d56d41dd700485e63f730ebd54c32aa880ab7ad00f973 (br-017e338b641f): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220601112128-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220601112128-9404 container: docker volume create cert-expiration-20220601112128-9404 --label name.minikube.sigs.k8s.io=cert-expiration-20220601112128-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220601112128-9404: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220601112128-9404': mkdir /var/lib/docker/volumes/cert-expiration-20220601112128-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220601112128-9404 container: docker volume create cert-expiration-20220601112128-9404 --label name.minikube.sigs.k8s.io=cert-expiration-20220601112128-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220601112128-9404: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220601112128-9404': mkdir /var/lib/docker/volumes/cert-expiration-20220601112128-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-windows-amd64.exe start -p cert-expiration-20220601112128-9404 --memory=2048 --cert-expiration=8760h --driver=docker" : exit status 60
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-20220601112128-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20220601112128-9404 in cluster cert-expiration-20220601112128-9404
	* Pulling base image ...
	* docker "cert-expiration-20220601112128-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220601112128-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0601 11:26:33.778091    2476 network_create.go:104] error while trying to create docker network cert-expiration-20220601112128-9404 192.168.49.0/24: create docker network cert-expiration-20220601112128-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220601112128-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a225ba49e4ee53ce8401f5268a7b9f91101f3744c3fa39a7c76eead6a59498a0 (br-a225ba49e4ee): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220601112128-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220601112128-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a225ba49e4ee53ce8401f5268a7b9f91101f3744c3fa39a7c76eead6a59498a0 (br-a225ba49e4ee): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220601112128-9404 container: docker volume create cert-expiration-20220601112128-9404 --label name.minikube.sigs.k8s.io=cert-expiration-20220601112128-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220601112128-9404: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220601112128-9404': mkdir /var/lib/docker/volumes/cert-expiration-20220601112128-9404: read-only file system
	
	E0601 11:27:26.778054    2476 network_create.go:104] error while trying to create docker network cert-expiration-20220601112128-9404 192.168.58.0/24: create docker network cert-expiration-20220601112128-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220601112128-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 017e338b641fa65b654d56d41dd700485e63f730ebd54c32aa880ab7ad00f973 (br-017e338b641f): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220601112128-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-expiration-20220601112128-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 017e338b641fa65b654d56d41dd700485e63f730ebd54c32aa880ab7ad00f973 (br-017e338b641f): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220601112128-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220601112128-9404 container: docker volume create cert-expiration-20220601112128-9404 --label name.minikube.sigs.k8s.io=cert-expiration-20220601112128-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220601112128-9404: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220601112128-9404': mkdir /var/lib/docker/volumes/cert-expiration-20220601112128-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220601112128-9404 container: docker volume create cert-expiration-20220601112128-9404 --label name.minikube.sigs.k8s.io=cert-expiration-20220601112128-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220601112128-9404: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220601112128-9404': mkdir /var/lib/docker/volumes/cert-expiration-20220601112128-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2022-06-01 11:27:41.0717662 +0000 GMT m=+3871.327663701
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertExpiration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-expiration-20220601112128-9404

=== CONT  TestCertExpiration
helpers_test.go:231: (dbg) Non-zero exit: docker inspect cert-expiration-20220601112128-9404: exit status 1 (1.1812361s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: cert-expiration-20220601112128-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-20220601112128-9404 -n cert-expiration-20220601112128-9404

=== CONT  TestCertExpiration
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-20220601112128-9404 -n cert-expiration-20220601112128-9404: exit status 7 (3.0148984s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:27:45.247449    7528 status.go:247] status error: host: state: unknown state "cert-expiration-20220601112128-9404": docker container inspect cert-expiration-20220601112128-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-expiration-20220601112128-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-20220601112128-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-expiration-20220601112128-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-20220601112128-9404

=== CONT  TestCertExpiration
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-20220601112128-9404: (8.5348357s)
--- FAIL: TestCertExpiration (385.63s)
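
The repeated "networks have overlapping IPv4" errors above come from Docker refusing to create a bridge network whose subnet collides with one an existing `br-*` bridge already claims; minikube retries 192.168.49.0/24 and then 192.168.58.0/24, and both are taken by stale networks. A minimal sketch of that overlap check, using Python's standard `ipaddress` module (the module choice is illustrative; this is not minikube's implementation):

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True if two CIDR blocks share at least one address."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The subnets minikube tried in this run collide with existing bridges:
print(subnets_overlap("192.168.58.0/24", "192.168.58.0/24"))   # True
# A sub-range of an occupied /24 still conflicts:
print(subnets_overlap("192.168.58.0/24", "192.168.58.128/25")) # True
# A disjoint /24 would have been accepted:
print(subnets_overlap("192.168.58.0/24", "192.168.59.0/24"))   # False
```

In this run the conflicting networks are leftovers from earlier test profiles, so removing them (for example with `docker network prune` after the owning containers are gone) or restarting Docker, as the log's suggestion says, frees the subnets for the next start.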

TestDockerFlags (96.68s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20220601112157-9404 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p docker-flags-20220601112157-9404 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: exit status 60 (1m17.5936894s)

-- stdout --
	* [docker-flags-20220601112157-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node docker-flags-20220601112157-9404 in cluster docker-flags-20220601112157-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-20220601112157-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:21:58.149035    4224 out.go:296] Setting OutFile to fd 1724 ...
	I0601 11:21:58.207582    4224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:21:58.207582    4224 out.go:309] Setting ErrFile to fd 1584...
	I0601 11:21:58.207582    4224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:21:58.220579    4224 out.go:303] Setting JSON to false
	I0601 11:21:58.222576    4224 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14453,"bootTime":1654068065,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:21:58.222576    4224 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:21:58.229589    4224 out.go:177] * [docker-flags-20220601112157-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:21:58.235585    4224 notify.go:193] Checking for updates...
	I0601 11:21:58.237580    4224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:21:58.240587    4224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:21:58.242574    4224 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:21:58.245575    4224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:21:58.248574    4224 config.go:178] Loaded profile config "cert-expiration-20220601112128-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:21:58.249575    4224 config.go:178] Loaded profile config "force-systemd-env-20220601112038-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:21:58.249575    4224 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:21:58.249575    4224 config.go:178] Loaded profile config "pause-20220601112115-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:21:58.249575    4224 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:22:00.970157    4224 docker.go:137] docker version: linux-20.10.14
	I0601 11:22:00.982623    4224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:22:03.196842    4224 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2140629s)
	I0601 11:22:03.197291    4224 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:22:02.0642092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:22:03.201520    4224 out.go:177] * Using the docker driver based on user configuration
	I0601 11:22:03.204885    4224 start.go:284] selected driver: docker
	I0601 11:22:03.204923    4224 start.go:806] validating driver "docker" against <nil>
	I0601 11:22:03.205044    4224 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:22:03.271757    4224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:22:05.433196    4224 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1611632s)
	I0601 11:22:05.433553    4224 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:22:04.3090845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:22:05.433752    4224 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:22:05.434338    4224 start_flags.go:842] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0601 11:22:05.444083    4224 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:22:05.447965    4224 cni.go:95] Creating CNI manager for ""
	I0601 11:22:05.448317    4224 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:22:05.448317    4224 start_flags.go:306] config:
	{Name:docker-flags-20220601112157-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:docker-flags-20220601112157-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:22:05.451117    4224 out.go:177] * Starting control plane node docker-flags-20220601112157-9404 in cluster docker-flags-20220601112157-9404
	I0601 11:22:05.462909    4224 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:22:05.465792    4224 out.go:177] * Pulling base image ...
	I0601 11:22:05.469257    4224 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:22:05.469328    4224 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:22:05.469523    4224 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:22:05.469558    4224 cache.go:57] Caching tarball of preloaded images
	I0601 11:22:05.469625    4224 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:22:05.469625    4224 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:22:05.470285    4224 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\docker-flags-20220601112157-9404\config.json ...
	I0601 11:22:05.470285    4224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\docker-flags-20220601112157-9404\config.json: {Name:mk28c6ea2493a7b417197f01409b070f56214219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:22:06.550208    4224 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:22:06.550278    4224 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:22:06.550334    4224 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:22:06.550334    4224 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:22:06.550334    4224 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:22:06.550334    4224 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:22:06.550903    4224 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:22:06.550940    4224 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:22:06.550940    4224 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:22:08.900304    4224 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-129587441: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-129587441: read-only file system"}
	I0601 11:22:08.900360    4224 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:22:08.900414    4224 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:22:08.900463    4224 start.go:352] acquiring machines lock for docker-flags-20220601112157-9404: {Name:mk132c01af00737c456c65bd1b9c00c01527bd55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:22:08.900463    4224 start.go:356] acquired machines lock for "docker-flags-20220601112157-9404" in 0s
	I0601 11:22:08.900463    4224 start.go:91] Provisioning new machine with config: &{Name:docker-flags-20220601112157-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:docker-fla
gs-20220601112157-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:22:08.901028    4224 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:22:08.905540    4224 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:22:08.905540    4224 start.go:165] libmachine.API.Create for "docker-flags-20220601112157-9404" (driver="docker")
	I0601 11:22:08.905540    4224 client.go:168] LocalClient.Create starting
	I0601 11:22:08.906178    4224 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:22:08.906178    4224 main.go:134] libmachine: Decoding PEM data...
	I0601 11:22:08.906713    4224 main.go:134] libmachine: Parsing certificate...
	I0601 11:22:08.906794    4224 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:22:08.906794    4224 main.go:134] libmachine: Decoding PEM data...
	I0601 11:22:08.906794    4224 main.go:134] libmachine: Parsing certificate...
	I0601 11:22:08.914622    4224 cli_runner.go:164] Run: docker network inspect docker-flags-20220601112157-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:22:10.008091    4224 cli_runner.go:211] docker network inspect docker-flags-20220601112157-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:22:10.008091    4224 cli_runner.go:217] Completed: docker network inspect docker-flags-20220601112157-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0934558s)
	I0601 11:22:10.015091    4224 network_create.go:272] running [docker network inspect docker-flags-20220601112157-9404] to gather additional debugging logs...
	I0601 11:22:10.015091    4224 cli_runner.go:164] Run: docker network inspect docker-flags-20220601112157-9404
	W0601 11:22:11.093136    4224 cli_runner.go:211] docker network inspect docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:22:11.093136    4224 cli_runner.go:217] Completed: docker network inspect docker-flags-20220601112157-9404: (1.0780324s)
	I0601 11:22:11.093136    4224 network_create.go:275] error running [docker network inspect docker-flags-20220601112157-9404]: docker network inspect docker-flags-20220601112157-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220601112157-9404
	I0601 11:22:11.093136    4224 network_create.go:277] output of [docker network inspect docker-flags-20220601112157-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220601112157-9404
	
	** /stderr **
	I0601 11:22:11.100132    4224 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:22:12.226266    4224 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1261203s)
	I0601 11:22:12.246260    4224 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00078c3f8] misses:0}
	I0601 11:22:12.246260    4224 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:22:12.246260    4224 network_create.go:115] attempt to create docker network docker-flags-20220601112157-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:22:12.253254    4224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220601112157-9404
	W0601 11:22:13.358467    4224 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:22:13.358467    4224 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220601112157-9404: (1.1052005s)
	E0601 11:22:13.358467    4224 network_create.go:104] error while trying to create docker network docker-flags-20220601112157-9404 192.168.49.0/24: create docker network docker-flags-20220601112157-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e0ccc9ddbffe48055d65b24436dd4cb78e9ca5a33a3d3a68fdd214af7282a277 (br-e0ccc9ddbffe): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:22:13.358467    4224 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220601112157-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e0ccc9ddbffe48055d65b24436dd4cb78e9ca5a33a3d3a68fdd214af7282a277 (br-e0ccc9ddbffe): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220601112157-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e0ccc9ddbffe48055d65b24436dd4cb78e9ca5a33a3d3a68fdd214af7282a277 (br-e0ccc9ddbffe): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:22:13.371457    4224 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:22:14.443541    4224 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0720713s)
	I0601 11:22:14.449554    4224 cli_runner.go:164] Run: docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:22:15.547888    4224 cli_runner.go:211] docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:22:15.548015    4224 cli_runner.go:217] Completed: docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true: (1.098232s)
	I0601 11:22:15.548015    4224 client.go:171] LocalClient.Create took 6.6424s
	I0601 11:22:17.565532    4224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:22:17.577707    4224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404
	W0601 11:22:18.688746    4224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:22:18.688746    4224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: (1.1110266s)
	I0601 11:22:18.688746    4224 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:18.982269    4224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404
	W0601 11:22:20.058916    4224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:22:20.058916    4224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: (1.0764063s)
	W0601 11:22:20.058916    4224 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	
	W0601 11:22:20.058916    4224 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:20.070294    4224 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:22:20.076599    4224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404
	W0601 11:22:21.181408    4224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:22:21.181620    4224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: (1.1047963s)
	I0601 11:22:21.181760    4224 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:21.483307    4224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404
	W0601 11:22:22.592350    4224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:22:22.592350    4224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: (1.1090297s)
	W0601 11:22:22.592350    4224 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	
	W0601 11:22:22.592350    4224 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:22.592350    4224 start.go:134] duration metric: createHost completed in 13.6911659s
	I0601 11:22:22.592350    4224 start.go:81] releasing machines lock for "docker-flags-20220601112157-9404", held for 13.6917302s
	W0601 11:22:22.592350    4224 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for docker-flags-20220601112157-9404 container: docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220601112157-9404: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220601112157-9404': mkdir /var/lib/docker/volumes/docker-flags-20220601112157-9404: read-only file system
	I0601 11:22:22.606407    4224 cli_runner.go:164] Run: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}
	W0601 11:22:23.686623    4224 cli_runner.go:211] docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:22:23.686623    4224 cli_runner.go:217] Completed: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: (1.0801498s)
	I0601 11:22:23.686623    4224 delete.go:82] Unable to get host status for docker-flags-20220601112157-9404, assuming it has already been deleted: state: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	W0601 11:22:23.686623    4224 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for docker-flags-20220601112157-9404 container: docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220601112157-9404: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220601112157-9404': mkdir /var/lib/docker/volumes/docker-flags-20220601112157-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for docker-flags-20220601112157-9404 container: docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220601112157-9404: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220601112157-9404': mkdir /var/lib/docker/volumes/docker-flags-20220601112157-9404: read-only file system
	
	I0601 11:22:23.686623    4224 start.go:614] Will try again in 5 seconds ...
	I0601 11:22:28.688676    4224 start.go:352] acquiring machines lock for docker-flags-20220601112157-9404: {Name:mk132c01af00737c456c65bd1b9c00c01527bd55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:22:28.689166    4224 start.go:356] acquired machines lock for "docker-flags-20220601112157-9404" in 312.1µs
	I0601 11:22:28.689389    4224 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:22:28.689418    4224 fix.go:55] fixHost starting: 
	I0601 11:22:28.708202    4224 cli_runner.go:164] Run: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}
	W0601 11:22:29.836207    4224 cli_runner.go:211] docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:22:29.836207    4224 cli_runner.go:217] Completed: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: (1.1279495s)
	I0601 11:22:29.836207    4224 fix.go:103] recreateIfNeeded on docker-flags-20220601112157-9404: state= err=unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:29.836207    4224 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:22:29.840873    4224 out.go:177] * docker "docker-flags-20220601112157-9404" container is missing, will recreate.
	I0601 11:22:29.842956    4224 delete.go:124] DEMOLISHING docker-flags-20220601112157-9404 ...
	I0601 11:22:29.856915    4224 cli_runner.go:164] Run: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}
	W0601 11:22:30.969583    4224 cli_runner.go:211] docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:22:30.969583    4224 cli_runner.go:217] Completed: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: (1.1124604s)
	W0601 11:22:30.969700    4224 stop.go:75] unable to get state: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:30.969744    4224 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:30.985513    4224 cli_runner.go:164] Run: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}
	W0601 11:22:32.058619    4224 cli_runner.go:211] docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:22:32.058619    4224 cli_runner.go:217] Completed: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: (1.0730932s)
	I0601 11:22:32.058619    4224 delete.go:82] Unable to get host status for docker-flags-20220601112157-9404, assuming it has already been deleted: state: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:32.065469    4224 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-20220601112157-9404
	W0601 11:22:33.181169    4224 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:22:33.181214    4224 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} docker-flags-20220601112157-9404: (1.1154854s)
	I0601 11:22:33.181214    4224 kic.go:356] could not find the container docker-flags-20220601112157-9404 to remove it. will try anyways
	I0601 11:22:33.188403    4224 cli_runner.go:164] Run: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}
	W0601 11:22:34.311328    4224 cli_runner.go:211] docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:22:34.311328    4224 cli_runner.go:217] Completed: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: (1.1229124s)
	W0601 11:22:34.311328    4224 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:34.319043    4224 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-20220601112157-9404 /bin/bash -c "sudo init 0"
	W0601 11:22:35.430650    4224 cli_runner.go:211] docker exec --privileged -t docker-flags-20220601112157-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:22:35.430650    4224 cli_runner.go:217] Completed: docker exec --privileged -t docker-flags-20220601112157-9404 /bin/bash -c "sudo init 0": (1.1115942s)
	I0601 11:22:35.430650    4224 oci.go:625] error shutdown docker-flags-20220601112157-9404: docker exec --privileged -t docker-flags-20220601112157-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:36.437765    4224 cli_runner.go:164] Run: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}
	W0601 11:22:37.530316    4224 cli_runner.go:211] docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:22:37.530351    4224 cli_runner.go:217] Completed: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: (1.0925007s)
	I0601 11:22:37.530351    4224 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:37.530351    4224 oci.go:639] temporary error: container docker-flags-20220601112157-9404 status is  but expect it to be exited
	I0601 11:22:37.530351    4224 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:38.012303    4224 cli_runner.go:164] Run: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}
	W0601 11:22:39.080515    4224 cli_runner.go:211] docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:22:39.080621    4224 cli_runner.go:217] Completed: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: (1.0681992s)
	I0601 11:22:39.080621    4224 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:39.080621    4224 oci.go:639] temporary error: container docker-flags-20220601112157-9404 status is  but expect it to be exited
	I0601 11:22:39.080621    4224 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:39.994900    4224 cli_runner.go:164] Run: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}
	W0601 11:22:41.042915    4224 cli_runner.go:211] docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:22:41.042915    4224 cli_runner.go:217] Completed: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: (1.0480038s)
	I0601 11:22:41.042915    4224 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:41.042915    4224 oci.go:639] temporary error: container docker-flags-20220601112157-9404 status is  but expect it to be exited
	I0601 11:22:41.042915    4224 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:41.704206    4224 cli_runner.go:164] Run: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}
	W0601 11:22:42.781606    4224 cli_runner.go:211] docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:22:42.781793    4224 cli_runner.go:217] Completed: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: (1.0773875s)
	I0601 11:22:42.781962    4224 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:42.782037    4224 oci.go:639] temporary error: container docker-flags-20220601112157-9404 status is  but expect it to be exited
	I0601 11:22:42.782037    4224 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:43.901609    4224 cli_runner.go:164] Run: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}
	W0601 11:22:44.989112    4224 cli_runner.go:211] docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:22:44.989112    4224 cli_runner.go:217] Completed: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: (1.0874906s)
	I0601 11:22:44.989112    4224 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:44.989112    4224 oci.go:639] temporary error: container docker-flags-20220601112157-9404 status is  but expect it to be exited
	I0601 11:22:44.989112    4224 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:46.522166    4224 cli_runner.go:164] Run: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}
	W0601 11:22:47.593139    4224 cli_runner.go:211] docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:22:47.593139    4224 cli_runner.go:217] Completed: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: (1.0706304s)
	I0601 11:22:47.593139    4224 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:47.593139    4224 oci.go:639] temporary error: container docker-flags-20220601112157-9404 status is  but expect it to be exited
	I0601 11:22:47.593139    4224 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:50.656042    4224 cli_runner.go:164] Run: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}
	W0601 11:22:51.744032    4224 cli_runner.go:211] docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:22:51.744032    4224 cli_runner.go:217] Completed: docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: (1.0877595s)
	I0601 11:22:51.744106    4224 oci.go:637] temporary error verifying shutdown: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:22:51.744106    4224 oci.go:639] temporary error: container docker-flags-20220601112157-9404 status is  but expect it to be exited
	I0601 11:22:51.744106    4224 oci.go:88] couldn't shut down docker-flags-20220601112157-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	 
	I0601 11:22:51.750861    4224 cli_runner.go:164] Run: docker rm -f -v docker-flags-20220601112157-9404
	I0601 11:22:52.800447    4224 cli_runner.go:217] Completed: docker rm -f -v docker-flags-20220601112157-9404: (1.0495734s)
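	The shutdown-verification cycle above (`oci.go:637`/`639` with `retry.go:31` backoffs) repeatedly inspects a container that no longer exists, then gives up with "might be okay" and falls through to `docker rm -f`. A minimal sketch of that verify-with-backoff pattern is below; this is an illustrative simplification, not minikube's actual code, and `inspectState` is a hypothetical stub standing in for the failing `docker container inspect --format {{.State.Status}}` call.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// inspectState stands in for `docker container inspect --format {{.State.Status}}`.
// Here it always fails, mimicking the "Error: No such container" seen in the log.
func inspectState(name string) (string, error) {
	return "", errors.New("No such container: " + name)
}

// verifyShutdown polls until the container reports "exited" or attempts run out,
// doubling the wait each round (the backoff values are illustrative only).
func verifyShutdown(name string, attempts int) error {
	backoff := 500 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		state, err := inspectState(name)
		if err == nil && state == "exited" {
			return nil
		}
		lastErr = fmt.Errorf("unknown state %q: %v", name, err)
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("couldn't verify container is exited: %v (might be okay)", lastErr)
}

func main() {
	err := verifyShutdown("docker-flags-20220601112157-9404", 3)
	fmt.Println(err != nil) // the stub never reports "exited", so this prints true
}
```

	As in the log, the loop treats an inspect failure the same as a not-yet-exited container, which is why a deleted container produces the full retry sequence before the "couldn't shut down ... (might be okay)" message.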
	I0601 11:22:52.806447    4224 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-20220601112157-9404
	W0601 11:22:53.901009    4224 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:22:53.901009    4224 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} docker-flags-20220601112157-9404: (1.0945491s)
	I0601 11:22:53.907014    4224 cli_runner.go:164] Run: docker network inspect docker-flags-20220601112157-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:22:54.973755    4224 cli_runner.go:211] docker network inspect docker-flags-20220601112157-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:22:54.973755    4224 cli_runner.go:217] Completed: docker network inspect docker-flags-20220601112157-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0667287s)
	I0601 11:22:54.979756    4224 network_create.go:272] running [docker network inspect docker-flags-20220601112157-9404] to gather additional debugging logs...
	I0601 11:22:54.979756    4224 cli_runner.go:164] Run: docker network inspect docker-flags-20220601112157-9404
	W0601 11:22:56.059871    4224 cli_runner.go:211] docker network inspect docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:22:56.059871    4224 cli_runner.go:217] Completed: docker network inspect docker-flags-20220601112157-9404: (1.0801026s)
	I0601 11:22:56.059871    4224 network_create.go:275] error running [docker network inspect docker-flags-20220601112157-9404]: docker network inspect docker-flags-20220601112157-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220601112157-9404
	I0601 11:22:56.059871    4224 network_create.go:277] output of [docker network inspect docker-flags-20220601112157-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220601112157-9404
	
	** /stderr **
	W0601 11:22:56.061280    4224 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:22:56.061317    4224 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:22:57.067281    4224 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:22:57.073121    4224 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:22:57.073121    4224 start.go:165] libmachine.API.Create for "docker-flags-20220601112157-9404" (driver="docker")
	I0601 11:22:57.073121    4224 client.go:168] LocalClient.Create starting
	I0601 11:22:57.073884    4224 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:22:57.073884    4224 main.go:134] libmachine: Decoding PEM data...
	I0601 11:22:57.073884    4224 main.go:134] libmachine: Parsing certificate...
	I0601 11:22:57.074563    4224 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:22:57.074563    4224 main.go:134] libmachine: Decoding PEM data...
	I0601 11:22:57.074563    4224 main.go:134] libmachine: Parsing certificate...
	I0601 11:22:57.081872    4224 cli_runner.go:164] Run: docker network inspect docker-flags-20220601112157-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:22:58.172644    4224 cli_runner.go:211] docker network inspect docker-flags-20220601112157-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:22:58.172644    4224 cli_runner.go:217] Completed: docker network inspect docker-flags-20220601112157-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0907595s)
	I0601 11:22:58.180658    4224 network_create.go:272] running [docker network inspect docker-flags-20220601112157-9404] to gather additional debugging logs...
	I0601 11:22:58.181259    4224 cli_runner.go:164] Run: docker network inspect docker-flags-20220601112157-9404
	W0601 11:22:59.271965    4224 cli_runner.go:211] docker network inspect docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:22:59.271965    4224 cli_runner.go:217] Completed: docker network inspect docker-flags-20220601112157-9404: (1.0906935s)
	I0601 11:22:59.271965    4224 network_create.go:275] error running [docker network inspect docker-flags-20220601112157-9404]: docker network inspect docker-flags-20220601112157-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220601112157-9404
	I0601 11:22:59.271965    4224 network_create.go:277] output of [docker network inspect docker-flags-20220601112157-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220601112157-9404
	
	** /stderr **
	I0601 11:22:59.279946    4224 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:23:00.377545    4224 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0973369s)
	I0601 11:23:00.395737    4224 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c3f8] amended:false}} dirty:map[] misses:0}
	I0601 11:23:00.396468    4224 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:23:00.412871    4224 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c3f8] amended:true}} dirty:map[192.168.49.0:0xc00078c3f8 192.168.58.0:0xc0006d02b8] misses:0}
	I0601 11:23:00.412871    4224 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:23:00.412871    4224 network_create.go:115] attempt to create docker network docker-flags-20220601112157-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:23:00.420510    4224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220601112157-9404
	W0601 11:23:01.469970    4224 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:23:01.469970    4224 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220601112157-9404: (1.049448s)
	E0601 11:23:01.469970    4224 network_create.go:104] error while trying to create docker network docker-flags-20220601112157-9404 192.168.58.0/24: create docker network docker-flags-20220601112157-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ad3e779891253b908a1678b9a7c8a392ed6743d179c2326450782af24dd9641d (br-ad3e77989125): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:23:01.469970    4224 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220601112157-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ad3e779891253b908a1678b9a7c8a392ed6743d179c2326450782af24dd9641d (br-ad3e77989125): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220601112157-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ad3e779891253b908a1678b9a7c8a392ed6743d179c2326450782af24dd9641d (br-ad3e77989125): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:23:01.485735    4224 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:23:02.571548    4224 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0858001s)
	I0601 11:23:02.579801    4224 cli_runner.go:164] Run: docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:23:03.661509    4224 cli_runner.go:211] docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:23:03.661653    4224 cli_runner.go:217] Completed: docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0816962s)
	I0601 11:23:03.661855    4224 client.go:171] LocalClient.Create took 6.5886577s
	I0601 11:23:05.686121    4224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:23:05.694340    4224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404
	W0601 11:23:06.776879    4224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:23:06.777016    4224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: (1.0823924s)
	I0601 11:23:06.777110    4224 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:23:07.120198    4224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404
	W0601 11:23:08.180912    4224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:23:08.181125    4224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: (1.0607013s)
	W0601 11:23:08.181325    4224 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	
	W0601 11:23:08.181361    4224 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:23:08.191641    4224 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:23:08.197073    4224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404
	W0601 11:23:09.276442    4224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:23:09.276675    4224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: (1.0793561s)
	I0601 11:23:09.276675    4224 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:23:09.530365    4224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404
	W0601 11:23:10.618171    4224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:23:10.619200    4224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: (1.0877928s)
	W0601 11:23:10.621181    4224 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	
	W0601 11:23:10.622125    4224 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
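	The repeated `get port 22` failures above come from the Go template in the inspect format string, which indexes into the container's published-ports map. The snippet below applies that exact template to a JSON document; the document is a hypothetical stand-in for real `docker container inspect` output, since in this run the container does not exist at all.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// hostPort applies the inspect format string from the log to an inspect-like
// JSON document, extracting the host port mapped to the container's port 22.
func hostPort(doc string) (string, error) {
	var c map[string]any
	if err := json.Unmarshal([]byte(doc), &c); err != nil {
		return "", err
	}
	tmpl, err := template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := tmpl.Execute(&out, c); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	port, err := hostPort(`{"NetworkSettings":{"Ports":{"22/tcp":[{"HostPort":"52718"}]}}}`)
	fmt.Println(port, err)
}
```

	When the container is missing, the inspect command itself exits 1 before the template ever runs, which is why every retry surfaces the same "No such container" stderr rather than a template error.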
	I0601 11:23:10.622125    4224 start.go:134] duration metric: createHost completed in 13.554688s
	I0601 11:23:10.635744    4224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:23:10.644689    4224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404
	W0601 11:23:11.690918    4224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:23:11.690952    4224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: (1.046045s)
	I0601 11:23:11.691050    4224 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:23:11.947834    4224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404
	W0601 11:23:13.049391    4224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:23:13.049459    4224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: (1.1013172s)
	W0601 11:23:13.049531    4224 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	
	W0601 11:23:13.049531    4224 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:23:13.059358    4224 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:23:13.065442    4224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404
	W0601 11:23:14.145738    4224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:23:14.145738    4224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: (1.0802835s)
	I0601 11:23:14.145738    4224 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:23:14.364589    4224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404
	W0601 11:23:15.462382    4224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404 returned with exit code 1
	I0601 11:23:15.462509    4224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: (1.0976254s)
	W0601 11:23:15.462642    4224 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	
	W0601 11:23:15.462714    4224 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220601112157-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220601112157-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	I0601 11:23:15.462714    4224 fix.go:57] fixHost completed within 46.7727582s
	I0601 11:23:15.462793    4224 start.go:81] releasing machines lock for "docker-flags-20220601112157-9404", held for 46.7730892s
	W0601 11:23:15.463431    4224 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-20220601112157-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220601112157-9404 container: docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220601112157-9404: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220601112157-9404': mkdir /var/lib/docker/volumes/docker-flags-20220601112157-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p docker-flags-20220601112157-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220601112157-9404 container: docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220601112157-9404: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220601112157-9404': mkdir /var/lib/docker/volumes/docker-flags-20220601112157-9404: read-only file system
	
	I0601 11:23:15.468331    4224 out.go:177] 
	W0601 11:23:15.470577    4224 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220601112157-9404 container: docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220601112157-9404: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220601112157-9404': mkdir /var/lib/docker/volumes/docker-flags-20220601112157-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220601112157-9404 container: docker volume create docker-flags-20220601112157-9404 --label name.minikube.sigs.k8s.io=docker-flags-20220601112157-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220601112157-9404: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220601112157-9404': mkdir /var/lib/docker/volumes/docker-flags-20220601112157-9404: read-only file system
	
	W0601 11:23:15.470728    4224 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:23:15.470967    4224 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:23:15.474472    4224 out.go:177] 

** /stderr **
docker_test.go:47: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p docker-flags-20220601112157-9404 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker" : exit status 60
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220601112157-9404 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20220601112157-9404 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (3.2626357s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_d4f85ee29175a4f8b67ccfa3331e6e8264cb6e77_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:52: failed to 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20220601112157-9404 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:57: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:57: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220601112157-9404 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20220601112157-9404 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (3.2361205s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_e7205990054f4366ee7f5bb530c13b1f3df973dc_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:63: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20220601112157-9404 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:67: expected "out/minikube-windows-amd64.exe -p docker-flags-20220601112157-9404 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:482: *** TestDockerFlags FAILED at 2022-06-01 11:23:22.0912473 +0000 GMT m=+3612.350106001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-20220601112157-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect docker-flags-20220601112157-9404: exit status 1 (1.1342381s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: docker-flags-20220601112157-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20220601112157-9404 -n docker-flags-20220601112157-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20220601112157-9404 -n docker-flags-20220601112157-9404: exit status 7 (2.9826401s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:23:26.188572    7984 status.go:247] status error: host: state: unknown state "docker-flags-20220601112157-9404": docker container inspect docker-flags-20220601112157-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220601112157-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-20220601112157-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-20220601112157-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20220601112157-9404

=== CONT  TestDockerFlags
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20220601112157-9404: (8.369664s)
--- FAIL: TestDockerFlags (96.68s)

TestForceSystemdFlag (94.43s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20220601111953-9404 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-20220601111953-9404 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: exit status 60 (1m18.2658552s)

-- stdout --
	* [force-systemd-flag-20220601111953-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node force-systemd-flag-20220601111953-9404 in cluster force-systemd-flag-20220601111953-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-20220601111953-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:19:54.009109    5312 out.go:296] Setting OutFile to fd 1744 ...
	I0601 11:19:54.068969    5312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:54.068969    5312 out.go:309] Setting ErrFile to fd 1756...
	I0601 11:19:54.068969    5312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:54.083212    5312 out.go:303] Setting JSON to false
	I0601 11:19:54.086385    5312 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14329,"bootTime":1654068065,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:19:54.087137    5312 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:19:54.091822    5312 out.go:177] * [force-systemd-flag-20220601111953-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:19:54.095103    5312 notify.go:193] Checking for updates...
	I0601 11:19:54.097366    5312 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:19:54.099938    5312 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:19:54.102017    5312 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:19:54.104869    5312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:19:54.108229    5312 config.go:178] Loaded profile config "kubernetes-upgrade-20220601111922-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:19:54.109060    5312 config.go:178] Loaded profile config "missing-upgrade-20220601111541-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0601 11:19:54.109496    5312 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:19:54.109639    5312 config.go:178] Loaded profile config "stopped-upgrade-20220601111410-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0601 11:19:54.109639    5312 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:19:56.848962    5312 docker.go:137] docker version: linux-20.10.14
	I0601 11:19:56.856670    5312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:58.975744    5312 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1190503s)
	I0601 11:19:58.976784    5312 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:19:57.9079355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:58.983935    5312 out.go:177] * Using the docker driver based on user configuration
	I0601 11:19:58.987521    5312 start.go:284] selected driver: docker
	I0601 11:19:58.987601    5312 start.go:806] validating driver "docker" against <nil>
	I0601 11:19:58.987627    5312 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:19:59.058166    5312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:20:01.150923    5312 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0926877s)
	I0601 11:20:01.151149    5312 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:20:00.1112696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:20:01.151149    5312 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:20:01.152120    5312 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 11:20:01.159218    5312 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:20:01.161134    5312 cni.go:95] Creating CNI manager for ""
	I0601 11:20:01.161134    5312 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:20:01.161134    5312 start_flags.go:306] config:
	{Name:force-systemd-flag-20220601111953-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-flag-20220601111953-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:20:01.164760    5312 out.go:177] * Starting control plane node force-systemd-flag-20220601111953-9404 in cluster force-systemd-flag-20220601111953-9404
	I0601 11:20:01.166062    5312 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:20:01.168845    5312 out.go:177] * Pulling base image ...
	I0601 11:20:01.171146    5312 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:20:01.171146    5312 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:20:01.171146    5312 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:20:01.171146    5312 cache.go:57] Caching tarball of preloaded images
	I0601 11:20:01.171146    5312 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:20:01.172722    5312 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:20:01.172941    5312 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-flag-20220601111953-9404\config.json ...
	I0601 11:20:01.173201    5312 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-flag-20220601111953-9404\config.json: {Name:mk7ec8e31fce3e65c5fd2707c7cc53c961947f3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:20:02.294327    5312 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:20:02.294383    5312 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:20:02.294383    5312 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:20:02.294383    5312 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:20:02.294383    5312 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:20:02.294909    5312 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:20:02.295043    5312 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:20:02.295138    5312 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:20:02.295138    5312 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:20:04.641008    5312 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-851108038: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-851108038: read-only file system"}
	I0601 11:20:04.641008    5312 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:20:04.641551    5312 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:20:04.641612    5312 start.go:352] acquiring machines lock for force-systemd-flag-20220601111953-9404: {Name:mkd2e1671bd667104ead68be88be376eded12c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:20:04.641961    5312 start.go:356] acquired machines lock for "force-systemd-flag-20220601111953-9404" in 243.4µs
	I0601 11:20:04.642298    5312 start.go:91] Provisioning new machine with config: &{Name:force-systemd-flag-20220601111953-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-flag-20220601111953-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:20:04.642401    5312 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:20:04.645790    5312 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:20:04.645790    5312 start.go:165] libmachine.API.Create for "force-systemd-flag-20220601111953-9404" (driver="docker")
	I0601 11:20:04.645790    5312 client.go:168] LocalClient.Create starting
	I0601 11:20:04.645790    5312 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:20:04.645790    5312 main.go:134] libmachine: Decoding PEM data...
	I0601 11:20:04.645790    5312 main.go:134] libmachine: Parsing certificate...
	I0601 11:20:04.645790    5312 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:20:04.645790    5312 main.go:134] libmachine: Decoding PEM data...
	I0601 11:20:04.645790    5312 main.go:134] libmachine: Parsing certificate...
	I0601 11:20:04.654804    5312 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220601111953-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:20:05.734070    5312 cli_runner.go:211] docker network inspect force-systemd-flag-20220601111953-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:20:05.734070    5312 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220601111953-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0792536s)
	I0601 11:20:05.740074    5312 network_create.go:272] running [docker network inspect force-systemd-flag-20220601111953-9404] to gather additional debugging logs...
	I0601 11:20:05.740074    5312 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220601111953-9404
	W0601 11:20:06.819865    5312 cli_runner.go:211] docker network inspect force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:20:06.819865    5312 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220601111953-9404: (1.0797783s)
	I0601 11:20:06.819865    5312 network_create.go:275] error running [docker network inspect force-systemd-flag-20220601111953-9404]: docker network inspect force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220601111953-9404
	I0601 11:20:06.819865    5312 network_create.go:277] output of [docker network inspect force-systemd-flag-20220601111953-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220601111953-9404
	
	** /stderr **
	I0601 11:20:06.827374    5312 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:20:07.913994    5312 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0860335s)
	I0601 11:20:07.933275    5312 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000790210] misses:0}
	I0601 11:20:07.933613    5312 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:20:07.933613    5312 network_create.go:115] attempt to create docker network force-systemd-flag-20220601111953-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:20:07.941432    5312 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220601111953-9404
	W0601 11:20:09.041171    5312 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:20:09.041548    5312 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220601111953-9404: (1.0997269s)
	E0601 11:20:09.041627    5312 network_create.go:104] error while trying to create docker network force-systemd-flag-20220601111953-9404 192.168.49.0/24: create docker network force-systemd-flag-20220601111953-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d8bd490d4126ef13690313558f5e13b8375ad081eb1e3e06a88818a07d428f46 (br-d8bd490d4126): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:20:09.041916    5312 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220601111953-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d8bd490d4126ef13690313558f5e13b8375ad081eb1e3e06a88818a07d428f46 (br-d8bd490d4126): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220601111953-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d8bd490d4126ef13690313558f5e13b8375ad081eb1e3e06a88818a07d428f46 (br-d8bd490d4126): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:20:09.055319    5312 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:20:10.144364    5312 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0890324s)
	I0601 11:20:10.151500    5312 cli_runner.go:164] Run: docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:20:11.238759    5312 cli_runner.go:211] docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:20:11.238759    5312 cli_runner.go:217] Completed: docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0871579s)
	I0601 11:20:11.238890    5312 client.go:171] LocalClient.Create took 6.5929599s
	I0601 11:20:13.252325    5312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:20:13.259198    5312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404
	W0601 11:20:14.337417    5312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:20:14.337417    5312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: (1.0782067s)
	I0601 11:20:14.337417    5312 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:14.630754    5312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404
	W0601 11:20:15.744178    5312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:20:15.744178    5312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: (1.1134117s)
	W0601 11:20:15.744178    5312 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	
	W0601 11:20:15.744178    5312 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:15.755117    5312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:20:15.761151    5312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404
	W0601 11:20:16.816420    5312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:20:16.816420    5312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: (1.0552569s)
	I0601 11:20:16.816420    5312 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:17.122427    5312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404
	W0601 11:20:18.214301    5312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:20:18.214301    5312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: (1.0918617s)
	W0601 11:20:18.214301    5312 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	
	W0601 11:20:18.214301    5312 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:18.214301    5312 start.go:134] duration metric: createHost completed in 13.5717457s
	I0601 11:20:18.214301    5312 start.go:81] releasing machines lock for "force-systemd-flag-20220601111953-9404", held for 13.5721086s
	W0601 11:20:18.214840    5312 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220601111953-9404 container: docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220601111953-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220601111953-9404': mkdir /var/lib/docker/volumes/force-systemd-flag-20220601111953-9404: read-only file system
	I0601 11:20:18.228682    5312 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}
	W0601 11:20:19.331124    5312 cli_runner.go:211] docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:19.331124    5312 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: (1.1024294s)
	I0601 11:20:19.331124    5312 delete.go:82] Unable to get host status for force-systemd-flag-20220601111953-9404, assuming it has already been deleted: state: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	W0601 11:20:19.331124    5312 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220601111953-9404 container: docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220601111953-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220601111953-9404': mkdir /var/lib/docker/volumes/force-systemd-flag-20220601111953-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220601111953-9404 container: docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220601111953-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220601111953-9404': mkdir /var/lib/docker/volumes/force-systemd-flag-20220601111953-9404: read-only file system
	
	I0601 11:20:19.331124    5312 start.go:614] Will try again in 5 seconds ...
	I0601 11:20:24.346486    5312 start.go:352] acquiring machines lock for force-systemd-flag-20220601111953-9404: {Name:mkd2e1671bd667104ead68be88be376eded12c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:20:24.346925    5312 start.go:356] acquired machines lock for "force-systemd-flag-20220601111953-9404" in 229.8µs
	I0601 11:20:24.347089    5312 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:20:24.347144    5312 fix.go:55] fixHost starting: 
	I0601 11:20:24.360538    5312 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}
	W0601 11:20:25.488693    5312 cli_runner.go:211] docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:25.488693    5312 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: (1.1281425s)
	I0601 11:20:25.488693    5312 fix.go:103] recreateIfNeeded on force-systemd-flag-20220601111953-9404: state= err=unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:25.488693    5312 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:20:25.491694    5312 out.go:177] * docker "force-systemd-flag-20220601111953-9404" container is missing, will recreate.
	I0601 11:20:25.494704    5312 delete.go:124] DEMOLISHING force-systemd-flag-20220601111953-9404 ...
	I0601 11:20:25.506693    5312 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}
	W0601 11:20:26.623402    5312 cli_runner.go:211] docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:26.623402    5312 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: (1.1166963s)
	W0601 11:20:26.623402    5312 stop.go:75] unable to get state: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:26.623402    5312 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:26.636402    5312 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}
	W0601 11:20:27.738903    5312 cli_runner.go:211] docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:27.738983    5312 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: (1.102318s)
	I0601 11:20:27.738983    5312 delete.go:82] Unable to get host status for force-systemd-flag-20220601111953-9404, assuming it has already been deleted: state: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:27.745742    5312 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-20220601111953-9404
	W0601 11:20:28.866507    5312 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:20:28.866556    5312 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} force-systemd-flag-20220601111953-9404: (1.1205446s)
	I0601 11:20:28.866601    5312 kic.go:356] could not find the container force-systemd-flag-20220601111953-9404 to remove it. will try anyways
	I0601 11:20:28.874219    5312 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}
	W0601 11:20:29.952404    5312 cli_runner.go:211] docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:29.952404    5312 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: (1.0781731s)
	W0601 11:20:29.952404    5312 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:29.958380    5312 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-20220601111953-9404 /bin/bash -c "sudo init 0"
	W0601 11:20:31.076087    5312 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-20220601111953-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:20:31.076087    5312 cli_runner.go:217] Completed: docker exec --privileged -t force-systemd-flag-20220601111953-9404 /bin/bash -c "sudo init 0": (1.1176947s)
	I0601 11:20:31.076087    5312 oci.go:625] error shutdown force-systemd-flag-20220601111953-9404: docker exec --privileged -t force-systemd-flag-20220601111953-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:32.086781    5312 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}
	W0601 11:20:33.165544    5312 cli_runner.go:211] docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:33.165813    5312 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: (1.0787504s)
	I0601 11:20:33.165919    5312 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:33.165991    5312 oci.go:639] temporary error: container force-systemd-flag-20220601111953-9404 status is  but expect it to be exited
	I0601 11:20:33.166009    5312 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:33.648926    5312 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}
	W0601 11:20:34.737286    5312 cli_runner.go:211] docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:34.737286    5312 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: (1.0883474s)
	I0601 11:20:34.737286    5312 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:34.737286    5312 oci.go:639] temporary error: container force-systemd-flag-20220601111953-9404 status is  but expect it to be exited
	I0601 11:20:34.737286    5312 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:35.640316    5312 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}
	W0601 11:20:36.756889    5312 cli_runner.go:211] docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:36.756938    5312 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: (1.1163522s)
	I0601 11:20:36.757009    5312 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:36.757009    5312 oci.go:639] temporary error: container force-systemd-flag-20220601111953-9404 status is  but expect it to be exited
	I0601 11:20:36.757009    5312 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:37.407043    5312 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}
	W0601 11:20:38.520776    5312 cli_runner.go:211] docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:38.520776    5312 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: (1.1137206s)
	I0601 11:20:38.520776    5312 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:38.520776    5312 oci.go:639] temporary error: container force-systemd-flag-20220601111953-9404 status is  but expect it to be exited
	I0601 11:20:38.520776    5312 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:39.644199    5312 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}
	W0601 11:20:40.790158    5312 cli_runner.go:211] docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:40.790158    5312 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: (1.1459457s)
	I0601 11:20:40.790158    5312 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:40.790158    5312 oci.go:639] temporary error: container force-systemd-flag-20220601111953-9404 status is  but expect it to be exited
	I0601 11:20:40.790158    5312 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:42.324346    5312 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}
	W0601 11:20:43.724869    5312 cli_runner.go:211] docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:43.724869    5312 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: (1.4005073s)
	I0601 11:20:43.724869    5312 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:43.724869    5312 oci.go:639] temporary error: container force-systemd-flag-20220601111953-9404 status is  but expect it to be exited
	I0601 11:20:43.724869    5312 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:46.786360    5312 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}
	W0601 11:20:47.903482    5312 cli_runner.go:211] docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:47.903482    5312 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: (1.1169815s)
	I0601 11:20:47.903482    5312 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:20:47.903482    5312 oci.go:639] temporary error: container force-systemd-flag-20220601111953-9404 status is  but expect it to be exited
	I0601 11:20:47.903482    5312 oci.go:88] couldn't shut down force-systemd-flag-20220601111953-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	 
	I0601 11:20:47.909485    5312 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-20220601111953-9404
	I0601 11:20:49.018610    5312 cli_runner.go:217] Completed: docker rm -f -v force-systemd-flag-20220601111953-9404: (1.1091125s)
	I0601 11:20:49.026727    5312 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-20220601111953-9404
	W0601 11:20:50.095655    5312 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:20:50.095655    5312 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} force-systemd-flag-20220601111953-9404: (1.0689162s)
	I0601 11:20:50.103044    5312 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220601111953-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:20:51.198733    5312 cli_runner.go:211] docker network inspect force-systemd-flag-20220601111953-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:20:51.199045    5312 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220601111953-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0956768s)
	I0601 11:20:51.203591    5312 network_create.go:272] running [docker network inspect force-systemd-flag-20220601111953-9404] to gather additional debugging logs...
	I0601 11:20:51.203591    5312 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220601111953-9404
	W0601 11:20:52.255917    5312 cli_runner.go:211] docker network inspect force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:20:52.255917    5312 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220601111953-9404: (1.0523137s)
	I0601 11:20:52.255917    5312 network_create.go:275] error running [docker network inspect force-systemd-flag-20220601111953-9404]: docker network inspect force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220601111953-9404
	I0601 11:20:52.255917    5312 network_create.go:277] output of [docker network inspect force-systemd-flag-20220601111953-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220601111953-9404
	
	** /stderr **
	W0601 11:20:52.257608    5312 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:20:52.257701    5312 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:20:53.266920    5312 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:20:53.272788    5312 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:20:53.272788    5312 start.go:165] libmachine.API.Create for "force-systemd-flag-20220601111953-9404" (driver="docker")
	I0601 11:20:53.272788    5312 client.go:168] LocalClient.Create starting
	I0601 11:20:53.273397    5312 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:20:53.274048    5312 main.go:134] libmachine: Decoding PEM data...
	I0601 11:20:53.274048    5312 main.go:134] libmachine: Parsing certificate...
	I0601 11:20:53.274048    5312 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:20:53.274713    5312 main.go:134] libmachine: Decoding PEM data...
	I0601 11:20:53.274713    5312 main.go:134] libmachine: Parsing certificate...
	I0601 11:20:53.283181    5312 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220601111953-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:20:54.358551    5312 cli_runner.go:211] docker network inspect force-systemd-flag-20220601111953-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:20:54.358551    5312 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220601111953-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0753579s)
	I0601 11:20:54.366124    5312 network_create.go:272] running [docker network inspect force-systemd-flag-20220601111953-9404] to gather additional debugging logs...
	I0601 11:20:54.366124    5312 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220601111953-9404
	W0601 11:20:55.466917    5312 cli_runner.go:211] docker network inspect force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:20:55.466917    5312 cli_runner.go:217] Completed: docker network inspect force-systemd-flag-20220601111953-9404: (1.1007811s)
	I0601 11:20:55.467013    5312 network_create.go:275] error running [docker network inspect force-systemd-flag-20220601111953-9404]: docker network inspect force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220601111953-9404
	I0601 11:20:55.467013    5312 network_create.go:277] output of [docker network inspect force-systemd-flag-20220601111953-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220601111953-9404
	
	** /stderr **
	I0601 11:20:55.475238    5312 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:20:56.553447    5312 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0780892s)
	I0601 11:20:56.569298    5312 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000790210] amended:false}} dirty:map[] misses:0}
	I0601 11:20:56.569298    5312 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:20:56.582641    5312 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000790210] amended:true}} dirty:map[192.168.49.0:0xc000790210 192.168.58.0:0xc00070e5a8] misses:0}
	I0601 11:20:56.582641    5312 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:20:56.582641    5312 network_create.go:115] attempt to create docker network force-systemd-flag-20220601111953-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:20:56.590571    5312 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220601111953-9404
	W0601 11:20:57.686135    5312 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:20:57.686295    5312 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220601111953-9404: (1.0954987s)
	E0601 11:20:57.686327    5312 network_create.go:104] error while trying to create docker network force-systemd-flag-20220601111953-9404 192.168.58.0/24: create docker network force-systemd-flag-20220601111953-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9fb9f3c5dd294ecdab15a30f29b4e5a388cff3bb2fdd1d102ccb3917aa98778b (br-9fb9f3c5dd29): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:20:57.686327    5312 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220601111953-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9fb9f3c5dd294ecdab15a30f29b4e5a388cff3bb2fdd1d102ccb3917aa98778b (br-9fb9f3c5dd29): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220601111953-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9fb9f3c5dd294ecdab15a30f29b4e5a388cff3bb2fdd1d102ccb3917aa98778b (br-9fb9f3c5dd29): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:20:57.699815    5312 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:20:58.782759    5312 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0828859s)
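The `docker network create` failure above ("networks have overlapping IPv4") means the daemon already had a bridge network covering 192.168.58.0/24, even though minikube's in-process reservation map considered it free. For two CIDR ranges, overlap reduces to one network containing the other's base address, which can be checked with the standard library. A sketch (hypothetical helper, not minikube code):

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two subnets overlap: the condition the
// Docker daemon rejects in the log above. For CIDR ranges, if they
// overlap at all, one must contain the other's network address.
func cidrsOverlap(a, b string) bool {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP)
}

func main() {
	fmt.Println(cidrsOverlap("192.168.58.0/24", "192.168.58.0/24")) // true
	fmt.Println(cidrsOverlap("192.168.49.0/24", "192.168.58.0/24")) // false
}
```

This is why the warning is marked "un-retryable": retrying the same subnet cannot succeed, so minikube continues without a dedicated network, accepting a possible cluster IP change after restart.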
	I0601 11:20:58.789594    5312 cli_runner.go:164] Run: docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:20:59.869835    5312 cli_runner.go:211] docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:20:59.869835    5312 cli_runner.go:217] Completed: docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0802288s)
	I0601 11:20:59.869835    5312 client.go:171] LocalClient.Create took 6.5969724s
	I0601 11:21:01.884850    5312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:21:01.891531    5312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404
	W0601 11:21:03.022411    5312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:21:03.022465    5312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: (1.1308115s)
	I0601 11:21:03.022608    5312 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:21:03.371512    5312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404
	W0601 11:21:04.521998    5312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:21:04.522122    5312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: (1.150473s)
	W0601 11:21:04.522184    5312 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	
	W0601 11:21:04.522184    5312 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:21:04.534668    5312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:21:04.542406    5312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404
	W0601 11:21:05.640785    5312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:21:05.640785    5312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: (1.098134s)
	I0601 11:21:05.640785    5312 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:21:05.882991    5312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404
	W0601 11:21:06.994044    5312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:21:06.994044    5312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: (1.11104s)
	W0601 11:21:06.994044    5312 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	
	W0601 11:21:06.994044    5312 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:21:06.994044    5312 start.go:134] duration metric: createHost completed in 13.7267742s
	I0601 11:21:07.005373    5312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:21:07.011524    5312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404
	W0601 11:21:08.136996    5312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:21:08.137031    5312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: (1.1253506s)
	I0601 11:21:08.137031    5312 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:21:08.395184    5312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404
	W0601 11:21:09.481482    5312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:21:09.481482    5312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: (1.0862854s)
	W0601 11:21:09.481482    5312 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	
	W0601 11:21:09.481482    5312 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:21:09.490476    5312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:21:09.496472    5312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404
	W0601 11:21:10.610842    5312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:21:10.611049    5312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: (1.1142455s)
	I0601 11:21:10.611204    5312 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:21:10.831702    5312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404
	W0601 11:21:11.952229    5312 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404 returned with exit code 1
	I0601 11:21:11.952317    5312 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: (1.1204353s)
	W0601 11:21:11.952463    5312 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	
	W0601 11:21:11.952463    5312 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220601111953-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220601111953-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	I0601 11:21:11.952463    5312 fix.go:57] fixHost completed within 47.6047773s
	I0601 11:21:11.952463    5312 start.go:81] releasing machines lock for "force-systemd-flag-20220601111953-9404", held for 47.6049963s
	W0601 11:21:11.953063    5312 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-20220601111953-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220601111953-9404 container: docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220601111953-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220601111953-9404': mkdir /var/lib/docker/volumes/force-systemd-flag-20220601111953-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-20220601111953-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220601111953-9404 container: docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220601111953-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220601111953-9404': mkdir /var/lib/docker/volumes/force-systemd-flag-20220601111953-9404: read-only file system
	
	I0601 11:21:11.966831    5312 out.go:177] 
	W0601 11:21:11.969733    5312 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220601111953-9404 container: docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220601111953-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220601111953-9404': mkdir /var/lib/docker/volumes/force-systemd-flag-20220601111953-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220601111953-9404 container: docker volume create force-systemd-flag-20220601111953-9404 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220601111953-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220601111953-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220601111953-9404': mkdir /var/lib/docker/volumes/force-systemd-flag-20220601111953-9404: read-only file system
	
	W0601 11:21:11.969733    5312 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:21:11.969733    5312 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:21:11.973620    5312 out.go:177] 

** /stderr **
docker_test.go:87: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-20220601111953-9404 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker" : exit status 60
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20220601111953-9404 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-flag-20220601111953-9404 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (3.3065947s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2837ebd22544166cf14c5e2e977cc80019e59e54_2.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-flag-20220601111953-9404 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2022-06-01 11:21:15.4004832 +0000 GMT m=+3485.660791801
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-20220601111953-9404

=== CONT  TestForceSystemdFlag
helpers_test.go:231: (dbg) Non-zero exit: docker inspect force-systemd-flag-20220601111953-9404: exit status 1 (1.1506923s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: force-systemd-flag-20220601111953-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-20220601111953-9404 -n force-systemd-flag-20220601111953-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-20220601111953-9404 -n force-systemd-flag-20220601111953-9404: exit status 7 (2.9983681s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:21:19.528953    6372 status.go:247] status error: host: state: unknown state "force-systemd-flag-20220601111953-9404": docker container inspect force-systemd-flag-20220601111953-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220601111953-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-20220601111953-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-20220601111953-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220601111953-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220601111953-9404: (8.6158376s)
--- FAIL: TestForceSystemdFlag (94.43s)

TestForceSystemdEnv (94.51s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20220601112038-9404 --memory=2048 --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-env-20220601112038-9404 --memory=2048 --alsologtostderr -v=5 --driver=docker: exit status 60 (1m18.3248741s)

-- stdout --
	* [force-systemd-env-20220601112038-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node force-systemd-env-20220601112038-9404 in cluster force-systemd-env-20220601112038-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-20220601112038-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:20:38.704082    3204 out.go:296] Setting OutFile to fd 832 ...
	I0601 11:20:38.773845    3204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:20:38.773845    3204 out.go:309] Setting ErrFile to fd 1528...
	I0601 11:20:38.773845    3204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:20:38.788539    3204 out.go:303] Setting JSON to false
	I0601 11:20:38.795164    3204 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14374,"bootTime":1654068064,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:20:38.795234    3204 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:20:38.803700    3204 out.go:177] * [force-systemd-env-20220601112038-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:20:38.812569    3204 notify.go:193] Checking for updates...
	I0601 11:20:38.815910    3204 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:20:38.819069    3204 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:20:38.830509    3204 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:20:38.835505    3204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:20:38.837886    3204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0601 11:20:38.840961    3204 config.go:178] Loaded profile config "force-systemd-flag-20220601111953-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:20:38.841543    3204 config.go:178] Loaded profile config "kubernetes-upgrade-20220601111922-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:20:38.841543    3204 config.go:178] Loaded profile config "missing-upgrade-20220601111541-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0601 11:20:38.842156    3204 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:20:38.842156    3204 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:20:41.592669    3204 docker.go:137] docker version: linux-20.10.14
	I0601 11:20:41.606181    3204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:20:43.740870    3204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1346643s)
	I0601 11:20:43.740870    3204 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:20:42.657999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:20:43.745877    3204 out.go:177] * Using the docker driver based on user configuration
	I0601 11:20:43.752870    3204 start.go:284] selected driver: docker
	I0601 11:20:43.752870    3204 start.go:806] validating driver "docker" against <nil>
	I0601 11:20:43.752870    3204 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:20:43.828499    3204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:20:45.961969    3204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1209055s)
	I0601 11:20:45.962356    3204 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:20:44.8871869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:20:45.962628    3204 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:20:45.963660    3204 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 11:20:45.966717    3204 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:20:45.968883    3204 cni.go:95] Creating CNI manager for ""
	I0601 11:20:45.968883    3204 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:20:45.968998    3204 start_flags.go:306] config:
	{Name:force-systemd-env-20220601112038-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-env-20220601112038-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:20:45.973393    3204 out.go:177] * Starting control plane node force-systemd-env-20220601112038-9404 in cluster force-systemd-env-20220601112038-9404
	I0601 11:20:45.976013    3204 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:20:45.977965    3204 out.go:177] * Pulling base image ...
	I0601 11:20:45.981849    3204 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:20:45.981849    3204 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:20:45.981849    3204 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:20:45.981849    3204 cache.go:57] Caching tarball of preloaded images
	I0601 11:20:45.981849    3204 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:20:45.981849    3204 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:20:45.981849    3204 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-20220601112038-9404\config.json ...
	I0601 11:20:45.982843    3204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-20220601112038-9404\config.json: {Name:mk57072529fec505c626a877c123bb9c655087a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:20:47.100489    3204 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:20:47.100489    3204 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:20:47.100489    3204 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:20:47.100489    3204 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:20:47.100489    3204 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:20:47.100489    3204 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:20:47.101163    3204 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:20:47.101227    3204 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:20:47.101275    3204 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:20:49.488443    3204 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-911501288: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-911501288: read-only file system"}
	I0601 11:20:49.488971    3204 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:20:49.488971    3204 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:20:49.489089    3204 start.go:352] acquiring machines lock for force-systemd-env-20220601112038-9404: {Name:mkdac0f665c4f150b100c80ec92d136085ad4e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:20:49.489315    3204 start.go:356] acquired machines lock for "force-systemd-env-20220601112038-9404" in 226µs
	I0601 11:20:49.489561    3204 start.go:91] Provisioning new machine with config: &{Name:force-systemd-env-20220601112038-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-env-20220601112038-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:20:49.489787    3204 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:20:49.629925    3204 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:20:49.630727    3204 start.go:165] libmachine.API.Create for "force-systemd-env-20220601112038-9404" (driver="docker")
	I0601 11:20:49.630727    3204 client.go:168] LocalClient.Create starting
	I0601 11:20:49.631308    3204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:20:49.631454    3204 main.go:134] libmachine: Decoding PEM data...
	I0601 11:20:49.631586    3204 main.go:134] libmachine: Parsing certificate...
	I0601 11:20:49.631728    3204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:20:49.631958    3204 main.go:134] libmachine: Decoding PEM data...
	I0601 11:20:49.632023    3204 main.go:134] libmachine: Parsing certificate...
	I0601 11:20:49.643773    3204 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:20:50.713957    3204 cli_runner.go:211] docker network inspect force-systemd-env-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:20:50.714035    3204 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0701166s)
	I0601 11:20:50.719120    3204 network_create.go:272] running [docker network inspect force-systemd-env-20220601112038-9404] to gather additional debugging logs...
	I0601 11:20:50.719120    3204 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220601112038-9404
	W0601 11:20:51.783251    3204 cli_runner.go:211] docker network inspect force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:20:51.783492    3204 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220601112038-9404: (1.0641191s)
	I0601 11:20:51.783525    3204 network_create.go:275] error running [docker network inspect force-systemd-env-20220601112038-9404]: docker network inspect force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220601112038-9404
	I0601 11:20:51.783525    3204 network_create.go:277] output of [docker network inspect force-systemd-env-20220601112038-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220601112038-9404
	
	** /stderr **
	I0601 11:20:51.791392    3204 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:20:52.841623    3204 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.050118s)
	I0601 11:20:52.861896    3204 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00071e0b8] misses:0}
	I0601 11:20:52.862937    3204 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:20:52.862937    3204 network_create.go:115] attempt to create docker network force-systemd-env-20220601112038-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:20:52.870437    3204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220601112038-9404
	W0601 11:20:53.904450    3204 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:20:53.904596    3204 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220601112038-9404: (1.0340018s)
	E0601 11:20:53.904596    3204 network_create.go:104] error while trying to create docker network force-systemd-env-20220601112038-9404 192.168.49.0/24: create docker network force-systemd-env-20220601112038-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c69422ecbccf7eb9d453c20414df222b3ecc478d2dc7e95101ed782a53d61892 (br-c69422ecbccf): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:20:53.904596    3204 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220601112038-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c69422ecbccf7eb9d453c20414df222b3ecc478d2dc7e95101ed782a53d61892 (br-c69422ecbccf): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220601112038-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c69422ecbccf7eb9d453c20414df222b3ecc478d2dc7e95101ed782a53d61892 (br-c69422ecbccf): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:20:53.917392    3204 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:20:55.016324    3204 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0987665s)
	I0601 11:20:55.023199    3204 cli_runner.go:164] Run: docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:20:56.137948    3204 cli_runner.go:211] docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:20:56.138093    3204 cli_runner.go:217] Completed: docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1145239s)
	I0601 11:20:56.138093    3204 client.go:171] LocalClient.Create took 6.5071934s
	I0601 11:20:58.163390    3204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:20:58.171198    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404
	W0601 11:20:59.221812    3204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:20:59.221855    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: (1.0503669s)
	I0601 11:20:59.222026    3204 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:20:59.514338    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404
	W0601 11:21:00.607319    3204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:00.607396    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: (1.0927997s)
	W0601 11:21:00.607555    3204 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	
	W0601 11:21:00.607590    3204 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:00.618713    3204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:21:00.624602    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404
	W0601 11:21:01.682712    3204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:01.682712    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: (1.0580978s)
	I0601 11:21:01.682712    3204 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:01.992944    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404
	W0601 11:21:03.100086    3204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:03.100086    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: (1.1071296s)
	W0601 11:21:03.100086    3204 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	
	W0601 11:21:03.100086    3204 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:03.100086    3204 start.go:134] duration metric: createHost completed in 13.6101442s
	I0601 11:21:03.100086    3204 start.go:81] releasing machines lock for "force-systemd-env-20220601112038-9404", held for 13.6106162s
	W0601 11:21:03.100086    3204 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220601112038-9404 container: docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220601112038-9404': mkdir /var/lib/docker/volumes/force-systemd-env-20220601112038-9404: read-only file system
	I0601 11:21:03.116240    3204 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}
	W0601 11:21:04.241046    3204 cli_runner.go:211] docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:21:04.241046    3204 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: (1.1243762s)
	I0601 11:21:04.241046    3204 delete.go:82] Unable to get host status for force-systemd-env-20220601112038-9404, assuming it has already been deleted: state: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	W0601 11:21:04.241046    3204 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220601112038-9404 container: docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220601112038-9404': mkdir /var/lib/docker/volumes/force-systemd-env-20220601112038-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220601112038-9404 container: docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220601112038-9404': mkdir /var/lib/docker/volumes/force-systemd-env-20220601112038-9404: read-only file system
	
	I0601 11:21:04.241046    3204 start.go:614] Will try again in 5 seconds ...
	I0601 11:21:09.250630    3204 start.go:352] acquiring machines lock for force-systemd-env-20220601112038-9404: {Name:mkdac0f665c4f150b100c80ec92d136085ad4e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:21:09.250901    3204 start.go:356] acquired machines lock for "force-systemd-env-20220601112038-9404" in 271.7µs
	I0601 11:21:09.251058    3204 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:21:09.251058    3204 fix.go:55] fixHost starting: 
	I0601 11:21:09.266391    3204 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}
	W0601 11:21:10.441446    3204 cli_runner.go:211] docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:21:10.441654    3204 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: (1.1750421s)
	I0601 11:21:10.441872    3204 fix.go:103] recreateIfNeeded on force-systemd-env-20220601112038-9404: state= err=unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:10.441897    3204 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:21:10.444910    3204 out.go:177] * docker "force-systemd-env-20220601112038-9404" container is missing, will recreate.
	I0601 11:21:10.448267    3204 delete.go:124] DEMOLISHING force-systemd-env-20220601112038-9404 ...
	I0601 11:21:10.459839    3204 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}
	W0601 11:21:11.559087    3204 cli_runner.go:211] docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:21:11.559228    3204 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: (1.0991115s)
	W0601 11:21:11.559331    3204 stop.go:75] unable to get state: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:11.559385    3204 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:11.575436    3204 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}
	W0601 11:21:12.726591    3204 cli_runner.go:211] docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:21:12.726737    3204 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: (1.1509856s)
	I0601 11:21:12.726864    3204 delete.go:82] Unable to get host status for force-systemd-env-20220601112038-9404, assuming it has already been deleted: state: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:12.734273    3204 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-20220601112038-9404
	W0601 11:21:13.821281    3204 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:13.821281    3204 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} force-systemd-env-20220601112038-9404: (1.0869957s)
	I0601 11:21:13.821281    3204 kic.go:356] could not find the container force-systemd-env-20220601112038-9404 to remove it. will try anyways
	I0601 11:21:13.828733    3204 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}
	W0601 11:21:14.921384    3204 cli_runner.go:211] docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:21:14.921524    3204 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: (1.0925951s)
	W0601 11:21:14.921571    3204 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:14.929101    3204 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-20220601112038-9404 /bin/bash -c "sudo init 0"
	W0601 11:21:16.070716    3204 cli_runner.go:211] docker exec --privileged -t force-systemd-env-20220601112038-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:21:16.070716    3204 cli_runner.go:217] Completed: docker exec --privileged -t force-systemd-env-20220601112038-9404 /bin/bash -c "sudo init 0": (1.141602s)
	I0601 11:21:16.070716    3204 oci.go:625] error shutdown force-systemd-env-20220601112038-9404: docker exec --privileged -t force-systemd-env-20220601112038-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:17.088765    3204 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}
	W0601 11:21:18.176950    3204 cli_runner.go:211] docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:21:18.176950    3204 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: (1.0881734s)
	I0601 11:21:18.176950    3204 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:18.176950    3204 oci.go:639] temporary error: container force-systemd-env-20220601112038-9404 status is  but expect it to be exited
	I0601 11:21:18.176950    3204 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:18.660906    3204 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}
	W0601 11:21:19.821294    3204 cli_runner.go:211] docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:21:19.821294    3204 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: (1.160375s)
	I0601 11:21:19.821294    3204 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:19.821294    3204 oci.go:639] temporary error: container force-systemd-env-20220601112038-9404 status is  but expect it to be exited
	I0601 11:21:19.821294    3204 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:20.731349    3204 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}
	W0601 11:21:21.890688    3204 cli_runner.go:211] docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:21:21.890761    3204 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: (1.159005s)
	I0601 11:21:21.890968    3204 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:21.890968    3204 oci.go:639] temporary error: container force-systemd-env-20220601112038-9404 status is  but expect it to be exited
	I0601 11:21:21.891138    3204 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:22.546809    3204 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}
	W0601 11:21:23.675567    3204 cli_runner.go:211] docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:21:23.675648    3204 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: (1.1285364s)
	I0601 11:21:23.675704    3204 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:23.675770    3204 oci.go:639] temporary error: container force-systemd-env-20220601112038-9404 status is  but expect it to be exited
	I0601 11:21:23.675770    3204 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:24.800776    3204 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}
	W0601 11:21:25.912608    3204 cli_runner.go:211] docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:21:25.912608    3204 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: (1.1118195s)
	I0601 11:21:25.912608    3204 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:25.912608    3204 oci.go:639] temporary error: container force-systemd-env-20220601112038-9404 status is  but expect it to be exited
	I0601 11:21:25.912608    3204 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:27.440657    3204 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}
	W0601 11:21:28.549334    3204 cli_runner.go:211] docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:21:28.549334    3204 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: (1.1086638s)
	I0601 11:21:28.549334    3204 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:28.549334    3204 oci.go:639] temporary error: container force-systemd-env-20220601112038-9404 status is  but expect it to be exited
	I0601 11:21:28.549334    3204 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:31.603941    3204 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}
	W0601 11:21:32.702483    3204 cli_runner.go:211] docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:21:32.702483    3204 cli_runner.go:217] Completed: docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: (1.09853s)
	I0601 11:21:32.702483    3204 oci.go:637] temporary error verifying shutdown: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:32.702483    3204 oci.go:639] temporary error: container force-systemd-env-20220601112038-9404 status is  but expect it to be exited
	I0601 11:21:32.702483    3204 oci.go:88] couldn't shut down force-systemd-env-20220601112038-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	 
	I0601 11:21:32.711271    3204 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-20220601112038-9404
	I0601 11:21:33.816319    3204 cli_runner.go:217] Completed: docker rm -f -v force-systemd-env-20220601112038-9404: (1.1049705s)
	I0601 11:21:33.824585    3204 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-20220601112038-9404
	W0601 11:21:34.905193    3204 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:34.905265    3204 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} force-systemd-env-20220601112038-9404: (1.0804209s)
	I0601 11:21:34.913283    3204 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:21:36.064006    3204 cli_runner.go:211] docker network inspect force-systemd-env-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:21:36.064006    3204 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1507102s)
	I0601 11:21:36.071602    3204 network_create.go:272] running [docker network inspect force-systemd-env-20220601112038-9404] to gather additional debugging logs...
	I0601 11:21:36.071602    3204 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220601112038-9404
	W0601 11:21:37.234853    3204 cli_runner.go:211] docker network inspect force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:37.234853    3204 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220601112038-9404: (1.1631317s)
	I0601 11:21:37.234853    3204 network_create.go:275] error running [docker network inspect force-systemd-env-20220601112038-9404]: docker network inspect force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220601112038-9404
	I0601 11:21:37.234853    3204 network_create.go:277] output of [docker network inspect force-systemd-env-20220601112038-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220601112038-9404
	
	** /stderr **
	W0601 11:21:37.235576    3204 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:21:37.235576    3204 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:21:38.250458    3204 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:21:38.255405    3204 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:21:38.255733    3204 start.go:165] libmachine.API.Create for "force-systemd-env-20220601112038-9404" (driver="docker")
	I0601 11:21:38.255775    3204 client.go:168] LocalClient.Create starting
	I0601 11:21:38.256092    3204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:21:38.256092    3204 main.go:134] libmachine: Decoding PEM data...
	I0601 11:21:38.256092    3204 main.go:134] libmachine: Parsing certificate...
	I0601 11:21:38.256688    3204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:21:38.256845    3204 main.go:134] libmachine: Decoding PEM data...
	I0601 11:21:38.256919    3204 main.go:134] libmachine: Parsing certificate...
	I0601 11:21:38.266587    3204 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:21:39.361188    3204 cli_runner.go:211] docker network inspect force-systemd-env-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:21:39.361297    3204 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0945879s)
	I0601 11:21:39.371887    3204 network_create.go:272] running [docker network inspect force-systemd-env-20220601112038-9404] to gather additional debugging logs...
	I0601 11:21:39.371887    3204 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220601112038-9404
	W0601 11:21:40.475848    3204 cli_runner.go:211] docker network inspect force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:40.475848    3204 cli_runner.go:217] Completed: docker network inspect force-systemd-env-20220601112038-9404: (1.103948s)
	I0601 11:21:40.475848    3204 network_create.go:275] error running [docker network inspect force-systemd-env-20220601112038-9404]: docker network inspect force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220601112038-9404
	I0601 11:21:40.475848    3204 network_create.go:277] output of [docker network inspect force-systemd-env-20220601112038-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220601112038-9404
	
	** /stderr **
	I0601 11:21:40.481859    3204 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:21:41.589629    3204 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1076002s)
	I0601 11:21:41.607613    3204 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00071e0b8] amended:false}} dirty:map[] misses:0}
	I0601 11:21:41.607613    3204 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:21:41.624773    3204 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00071e0b8] amended:true}} dirty:map[192.168.49.0:0xc00071e0b8 192.168.58.0:0xc0000069c0] misses:0}
	I0601 11:21:41.624773    3204 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:21:41.624773    3204 network_create.go:115] attempt to create docker network force-systemd-env-20220601112038-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:21:41.634540    3204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220601112038-9404
	W0601 11:21:42.750717    3204 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:42.750798    3204 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220601112038-9404: (1.1159966s)
	E0601 11:21:42.750798    3204 network_create.go:104] error while trying to create docker network force-systemd-env-20220601112038-9404 192.168.58.0/24: create docker network force-systemd-env-20220601112038-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fbddc740938c14189777e6fdba8daaf7d812eac0e99c0e68052e08fed44571ab (br-fbddc740938c): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:21:42.751044    3204 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220601112038-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fbddc740938c14189777e6fdba8daaf7d812eac0e99c0e68052e08fed44571ab (br-fbddc740938c): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220601112038-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fbddc740938c14189777e6fdba8daaf7d812eac0e99c0e68052e08fed44571ab (br-fbddc740938c): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:21:42.764899    3204 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:21:43.838229    3204 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0732886s)
	I0601 11:21:43.845928    3204 cli_runner.go:164] Run: docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:21:44.868889    3204 cli_runner.go:211] docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:21:44.868889    3204 cli_runner.go:217] Completed: docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0229497s)
	I0601 11:21:44.868889    3204 client.go:171] LocalClient.Create took 6.6129812s
	I0601 11:21:46.889475    3204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:21:46.895771    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404
	W0601 11:21:47.969944    3204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:47.969944    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: (1.0741602s)
	I0601 11:21:47.969944    3204 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:48.308441    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404
	W0601 11:21:49.398102    3204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:49.398102    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: (1.0896482s)
	W0601 11:21:49.398102    3204 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	
	W0601 11:21:49.398102    3204 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:49.407127    3204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:21:49.414111    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404
	W0601 11:21:50.555478    3204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:50.555478    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: (1.1413541s)
	I0601 11:21:50.555478    3204 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:50.785157    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404
	W0601 11:21:51.881382    3204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:51.881382    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: (1.0962125s)
	W0601 11:21:51.881382    3204 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	
	W0601 11:21:51.881382    3204 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:51.881382    3204 start.go:134] duration metric: createHost completed in 13.6305674s
	I0601 11:21:51.892805    3204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:21:51.898947    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404
	W0601 11:21:53.001424    3204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:53.001424    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: (1.1024643s)
	I0601 11:21:53.001424    3204 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:53.262687    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404
	W0601 11:21:54.369280    3204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:54.369341    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: (1.1065802s)
	W0601 11:21:54.369341    3204 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	
	W0601 11:21:54.369341    3204 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:54.379879    3204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:21:54.386291    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404
	W0601 11:21:55.458485    3204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:55.458485    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: (1.0721816s)
	I0601 11:21:55.458485    3204 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:55.674053    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404
	W0601 11:21:56.736850    3204 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404 returned with exit code 1
	I0601 11:21:56.736908    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: (1.0627274s)
	W0601 11:21:56.736908    3204 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	
	W0601 11:21:56.736908    3204 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	I0601 11:21:56.736908    3204 fix.go:57] fixHost completed within 47.4853086s
	I0601 11:21:56.736908    3204 start.go:81] releasing machines lock for "force-systemd-env-20220601112038-9404", held for 47.4854657s
	W0601 11:21:56.737554    3204 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-20220601112038-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220601112038-9404 container: docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220601112038-9404': mkdir /var/lib/docker/volumes/force-systemd-env-20220601112038-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-20220601112038-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220601112038-9404 container: docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220601112038-9404': mkdir /var/lib/docker/volumes/force-systemd-env-20220601112038-9404: read-only file system
	
	I0601 11:21:56.742527    3204 out.go:177] 
	W0601 11:21:56.745296    3204 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220601112038-9404 container: docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220601112038-9404': mkdir /var/lib/docker/volumes/force-systemd-env-20220601112038-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220601112038-9404 container: docker volume create force-systemd-env-20220601112038-9404 --label name.minikube.sigs.k8s.io=force-systemd-env-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220601112038-9404': mkdir /var/lib/docker/volumes/force-systemd-env-20220601112038-9404: read-only file system
	
	W0601 11:21:56.745602    3204 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:21:56.745797    3204 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:21:56.749154    3204 out.go:177] 

** /stderr **
docker_test.go:152: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-env-20220601112038-9404 --memory=2048 --alsologtostderr -v=5 --driver=docker" : exit status 60
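The repeated "networks have overlapping IPv4" errors in the log above come from Docker rejecting a new bridge whose subnet collides with an existing one. A minimal sketch of that overlap check using Python's `ipaddress` module (the `overlapping` helper is hypothetical, not minikube code; the CIDRs are the ones from this log):

```python
import ipaddress

def overlapping(candidate, existing):
    """Return the subnets in `existing` that overlap the candidate CIDR."""
    cand = ipaddress.ip_network(candidate)
    return [s for s in existing if cand.overlaps(ipaddress.ip_network(s))]

# From the log: 192.168.49.0/24 holds an unexpired reservation, and the
# daemon already has a bridge (br-50298ec25928) on 192.168.58.0/24, so
# minikube's second pick collides as well.
print(overlapping("192.168.58.0/24", ["192.168.49.0/24", "192.168.58.0/24"]))
# → ['192.168.58.0/24']
```

`docker network inspect bridge` style enumeration of existing subnets, fed through a check like this, is effectively what minikube's `network.go` reservation logic is attempting before `docker network create` fails.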
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20220601112038-9404 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdEnv
docker_test.go:104: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-env-20220601112038-9404 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (3.2203567s)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2837ebd22544166cf14c5e2e977cc80019e59e54_2.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-env-20220601112038-9404 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:161: *** TestForceSystemdEnv FAILED at 2022-06-01 11:22:00.0855227 +0000 GMT m=+3530.345321901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-20220601112038-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect force-systemd-env-20220601112038-9404: exit status 1 (1.1900081s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: force-systemd-env-20220601112038-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20220601112038-9404 -n force-systemd-env-20220601112038-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20220601112038-9404 -n force-systemd-env-20220601112038-9404: exit status 7 (3.0639658s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:22:04.318912    1768 status.go:247] status error: host: state: unknown state "force-systemd-env-20220601112038-9404": docker container inspect force-systemd-env-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220601112038-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-20220601112038-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-20220601112038-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20220601112038-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20220601112038-9404: (8.614504s)
--- FAIL: TestForceSystemdEnv (94.51s)
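The root cause tagged PR_DOCKER_READONLY_VOL above is Docker's volume root going read-only inside the Docker Desktop VM (`mkdir /var/lib/docker/volumes/...: read-only file system`). A hedged sketch of how such a read-only mount could be detected; the sample data is hypothetical, in `/proc/mounts` format (device, mountpoint, fstype, options, ...):

```python
def readonly_mounts(mounts_text):
    """Return mount points whose option list includes 'ro' (read-only),
    given text in /proc/mounts format."""
    hits = []
    for line in mounts_text.splitlines():
        fields = line.split()
        # fields[1] is the mount point, fields[3] the comma-separated options
        if len(fields) >= 4 and "ro" in fields[3].split(","):
            hits.append(fields[1])
    return hits

# Hypothetical sample: the error in this log suggests the VM's docker
# root had flipped to read-only, roughly like the second line here.
sample = (
    "overlay / overlay rw,relatime 0 0\n"
    "overlay /var/lib/docker overlay ro,relatime 0 0\n"
)
print(readonly_mounts(sample))
# → ['/var/lib/docker']
```

This matches minikube's own suggestion in the log: the condition is external to the cluster, so restarting Docker (remounting the VM filesystem read-write) is the fix rather than any minikube flag.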
TestErrorSpam/setup (73.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20220601102633-9404 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 --driver=docker
error_spam_test.go:78: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p nospam-20220601102633-9404 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 --driver=docker: exit status 60 (1m13.6614901s)

-- stdout --
	* [nospam-20220601102633-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node nospam-20220601102633-9404 in cluster nospam-20220601102633-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	* docker "nospam-20220601102633-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	
	
-- /stdout --
** stderr ** 
	E0601 10:26:47.395098    4032 network_create.go:104] error while trying to create docker network nospam-20220601102633-9404 192.168.49.0/24: create docker network nospam-20220601102633-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220601102633-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 198961d519e44b9e70e41217fcc3d6c85f87b5a4789167c47652a7c103a66b72 (br-198961d519e4): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220601102633-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220601102633-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 198961d519e44b9e70e41217fcc3d6c85f87b5a4789167c47652a7c103a66b72 (br-198961d519e4): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for nospam-20220601102633-9404 container: docker volume create nospam-20220601102633-9404 --label name.minikube.sigs.k8s.io=nospam-20220601102633-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create nospam-20220601102633-9404: error while creating volume root path '/var/lib/docker/volumes/nospam-20220601102633-9404': mkdir /var/lib/docker/volumes/nospam-20220601102633-9404: read-only file system
	
	E0601 10:27:33.585924    4032 network_create.go:104] error while trying to create docker network nospam-20220601102633-9404 192.168.58.0/24: create docker network nospam-20220601102633-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220601102633-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 674218ce6aec5d5517b3db3cc6cfbb09641bd01cd29f6b2f29f3ea9ebdea4914 (br-674218ce6aec): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220601102633-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220601102633-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 674218ce6aec5d5517b3db3cc6cfbb09641bd01cd29f6b2f29f3ea9ebdea4914 (br-674218ce6aec): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p nospam-20220601102633-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220601102633-9404 container: docker volume create nospam-20220601102633-9404 --label name.minikube.sigs.k8s.io=nospam-20220601102633-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create nospam-20220601102633-9404: error while creating volume root path '/var/lib/docker/volumes/nospam-20220601102633-9404': mkdir /var/lib/docker/volumes/nospam-20220601102633-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220601102633-9404 container: docker volume create nospam-20220601102633-9404 --label name.minikube.sigs.k8s.io=nospam-20220601102633-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create nospam-20220601102633-9404: error while creating volume root path '/var/lib/docker/volumes/nospam-20220601102633-9404': mkdir /var/lib/docker/volumes/nospam-20220601102633-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
error_spam_test.go:80: "out/minikube-windows-amd64.exe start -p nospam-20220601102633-9404 -n=1 --memory=2250 --wait=false --log_dir=C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 --driver=docker" failed: exit status 60
error_spam_test.go:93: unexpected stderr: "E0601 10:26:47.395098    4032 network_create.go:104] error while trying to create docker network nospam-20220601102633-9404 192.168.49.0/24: create docker network nospam-20220601102633-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220601102633-9404: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: cannot create network 198961d519e44b9e70e41217fcc3d6c85f87b5a4789167c47652a7c103a66b72 (br-198961d519e4): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4"
error_spam_test.go:93: unexpected stderr: "! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220601102633-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220601102633-9404: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: cannot create network 198961d519e44b9e70e41217fcc3d6c85f87b5a4789167c47652a7c103a66b72 (br-198961d519e4): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4"
error_spam_test.go:93: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for nospam-20220601102633-9404 container: docker volume create nospam-20220601102633-9404 --label name.minikube.sigs.k8s.io=nospam-20220601102633-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: create nospam-20220601102633-9404: error while creating volume root path '/var/lib/docker/volumes/nospam-20220601102633-9404': mkdir /var/lib/docker/volumes/nospam-20220601102633-9404: read-only file system"
error_spam_test.go:93: unexpected stderr: "E0601 10:27:33.585924    4032 network_create.go:104] error while trying to create docker network nospam-20220601102633-9404 192.168.58.0/24: create docker network nospam-20220601102633-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220601102633-9404: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: cannot create network 674218ce6aec5d5517b3db3cc6cfbb09641bd01cd29f6b2f29f3ea9ebdea4914 (br-674218ce6aec): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4"
error_spam_test.go:93: unexpected stderr: "! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220601102633-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220601102633-9404: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: cannot create network 674218ce6aec5d5517b3db3cc6cfbb09641bd01cd29f6b2f29f3ea9ebdea4914 (br-674218ce6aec): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4"
error_spam_test.go:93: unexpected stderr: "* Failed to start docker container. Running \"minikube delete -p nospam-20220601102633-9404\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220601102633-9404 container: docker volume create nospam-20220601102633-9404 --label name.minikube.sigs.k8s.io=nospam-20220601102633-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: create nospam-20220601102633-9404: error while creating volume root path '/var/lib/docker/volumes/nospam-20220601102633-9404': mkdir /var/lib/docker/volumes/nospam-20220601102633-9404: read-only file system"
error_spam_test.go:93: unexpected stderr: "X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220601102633-9404 container: docker volume create nospam-20220601102633-9404 --label name.minikube.sigs.k8s.io=nospam-20220601102633-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1"
error_spam_test.go:93: unexpected stderr: "stdout:"
error_spam_test.go:93: unexpected stderr: "stderr:"
error_spam_test.go:93: unexpected stderr: "Error response from daemon: create nospam-20220601102633-9404: error while creating volume root path '/var/lib/docker/volumes/nospam-20220601102633-9404': mkdir /var/lib/docker/volumes/nospam-20220601102633-9404: read-only file system"
error_spam_test.go:93: unexpected stderr: "* Suggestion: Restart Docker"
error_spam_test.go:93: unexpected stderr: "* Related issue: https://github.com/kubernetes/minikube/issues/6825"
error_spam_test.go:107: minikube stdout:
* [nospam-20220601102633-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=14079
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with the root privilege
* Starting control plane node nospam-20220601102633-9404 in cluster nospam-20220601102633-9404
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* docker "nospam-20220601102633-9404" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=2250MB) ...


error_spam_test.go:108: minikube stderr:
E0601 10:26:47.395098    4032 network_create.go:104] error while trying to create docker network nospam-20220601102633-9404 192.168.49.0/24: create docker network nospam-20220601102633-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220601102633-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 198961d519e44b9e70e41217fcc3d6c85f87b5a4789167c47652a7c103a66b72 (br-198961d519e4): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220601102633-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220601102633-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 198961d519e44b9e70e41217fcc3d6c85f87b5a4789167c47652a7c103a66b72 (br-198961d519e4): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4

! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for nospam-20220601102633-9404 container: docker volume create nospam-20220601102633-9404 --label name.minikube.sigs.k8s.io=nospam-20220601102633-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create nospam-20220601102633-9404: error while creating volume root path '/var/lib/docker/volumes/nospam-20220601102633-9404': mkdir /var/lib/docker/volumes/nospam-20220601102633-9404: read-only file system

E0601 10:27:33.585924    4032 network_create.go:104] error while trying to create docker network nospam-20220601102633-9404 192.168.58.0/24: create docker network nospam-20220601102633-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220601102633-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 674218ce6aec5d5517b3db3cc6cfbb09641bd01cd29f6b2f29f3ea9ebdea4914 (br-674218ce6aec): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220601102633-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20220601102633-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 674218ce6aec5d5517b3db3cc6cfbb09641bd01cd29f6b2f29f3ea9ebdea4914 (br-674218ce6aec): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4

* Failed to start docker container. Running "minikube delete -p nospam-20220601102633-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220601102633-9404 container: docker volume create nospam-20220601102633-9404 --label name.minikube.sigs.k8s.io=nospam-20220601102633-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create nospam-20220601102633-9404: error while creating volume root path '/var/lib/docker/volumes/nospam-20220601102633-9404': mkdir /var/lib/docker/volumes/nospam-20220601102633-9404: read-only file system

X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220601102633-9404 container: docker volume create nospam-20220601102633-9404 --label name.minikube.sigs.k8s.io=nospam-20220601102633-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create nospam-20220601102633-9404: error while creating volume root path '/var/lib/docker/volumes/nospam-20220601102633-9404': mkdir /var/lib/docker/volumes/nospam-20220601102633-9404: read-only file system

* Suggestion: Restart Docker
* Related issue: https://github.com/kubernetes/minikube/issues/6825
error_spam_test.go:118: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:118: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:118: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (73.66s)
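The overlap failures above are deterministic: the Docker daemon rejects any new bridge whose IPv4 range intersects an existing `br-*` network, and minikube then retries with the next candidate /24 (the log shows 192.168.49.0/24, then 192.168.58.0/24). A minimal sketch of that overlap test using Python's stdlib `ipaddress` module — illustrative only, not minikube's actual implementation, and the third candidate subnet below is hypothetical:

```python
import ipaddress

def overlaps(a: str, b: str) -> bool:
    """True if the two IPv4 networks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The first two candidates are the subnets minikube tried in the log above;
# the third is a made-up next candidate for illustration.
candidates = ["192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"]
# Ranges already claimed by stale bridge networks (per the daemon errors).
in_use = ["192.168.49.0/24", "192.168.58.0/24"]

# First candidate whose range is free of every existing bridge network.
free = next(c for c in candidates
            if not any(overlaps(c, used) for used in in_use))
print(free)  # -> 192.168.67.0/24
```

In this run both probed subnets collided with leftover bridges, so the fallback also failed; clearing the stale networks (e.g. `docker network prune`) or restarting Docker, as the report suggests, frees the ranges.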

TestFunctional/serial/StartWithProxy (78.72s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
functional_test.go:2160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: exit status 60 (1m14.8019121s)

-- stdout --
	* [functional-20220601102952-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node functional-20220601102952-9404 in cluster functional-20220601102952-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20220601102952-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49974 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49974 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49974 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49974 to docker env.
	E0601 10:30:07.342059    4628 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.49.0/24: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e95f73e5621c887895b91aa717e30a3b37beb54362298abfa9ee4680b589c919 (br-e95f73e5621c): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e95f73e5621c887895b91aa717e30a3b37beb54362298abfa9ee4680b589c919 (br-e95f73e5621c): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49974 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49974 to docker env.
	E0601 10:30:53.946844    4628 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.58.0/24: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0a1b850887f0243ae932a55d76ed497d3b731c67ae0ca5e80d539aa9f3526f7f (br-0a1b850887f0): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0a1b850887f0243ae932a55d76ed497d3b731c67ae0ca5e80d539aa9f3526f7f (br-0a1b850887f0): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p functional-20220601102952-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
functional_test.go:2162: failed minikube start. args "out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker": exit status 60
functional_test.go:2167: start stdout=* [functional-20220601102952-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=14079
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with the root privilege
* Starting control plane node functional-20220601102952-9404 in cluster functional-20220601102952-9404
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=4000MB) ...
* docker "functional-20220601102952-9404" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=4000MB) ...


, want: *Found network options:*
functional_test.go:2172: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:49974 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:49974 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:49974 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:49974 to docker env.
E0601 10:30:07.342059    4628 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.49.0/24: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network e95f73e5621c887895b91aa717e30a3b37beb54362298abfa9ee4680b589c919 (br-e95f73e5621c): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network e95f73e5621c887895b91aa717e30a3b37beb54362298abfa9ee4680b589c919 (br-e95f73e5621c): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4

! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system

! Local proxy ignored: not passing HTTP_PROXY=localhost:49974 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:49974 to docker env.
E0601 10:30:53.946844    4628 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.58.0/24: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 0a1b850887f0243ae932a55d76ed497d3b731c67ae0ca5e80d539aa9f3526f7f (br-0a1b850887f0): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 0a1b850887f0243ae932a55d76ed497d3b731c67ae0ca5e80d539aa9f3526f7f (br-0a1b850887f0): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4

* Failed to start docker container. Running "minikube delete -p functional-20220601102952-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system

X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system

* Suggestion: Restart Docker
* Related issue: https://github.com/kubernetes/minikube/issues/6825
, want: *You appear to be using a proxy*
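The "networks have overlapping IPv4" failure above happens because the bridge subnet minikube requested (192.168.58.0/24) collides with the address space of a stale network (br-50298ec25928) left behind by an earlier profile. The overlap check Docker performs when allocating the bridge can be modeled with a short sketch (illustrative only, not minikube's or Docker's actual code):

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """True when two IPv4 CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The subnet minikube tried to allocate, against a stale copy of the
# same range and against a disjoint neighbor:
print(subnets_overlap("192.168.58.0/24", "192.168.58.0/24"))  # True
print(subnets_overlap("192.168.58.0/24", "192.168.49.0/24"))  # False
```

Pruning the stale bridge (e.g. `docker network prune`) clears the conflict, which is why minikube treats this as un-retryable until the old network is gone.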
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.1145998s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (2.7808473s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:31:11.298777    9676 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/StartWithProxy (78.72s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
functional_test.go:630: audit.json does not contain the profile "functional-20220601102952-9404"
--- FAIL: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (114.04s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --alsologtostderr -v=8
functional_test.go:651: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --alsologtostderr -v=8: exit status 60 (1m49.8278053s)

-- stdout --
	* [functional-20220601102952-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20220601102952-9404 in cluster functional-20220601102952-9404
	* Pulling base image ...
	* docker "functional-20220601102952-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20220601102952-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...

-- /stdout --
** stderr ** 
	I0601 10:31:11.558389    9440 out.go:296] Setting OutFile to fd 728 ...
	I0601 10:31:11.613133    9440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:31:11.613133    9440 out.go:309] Setting ErrFile to fd 260...
	I0601 10:31:11.613133    9440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:31:11.627138    9440 out.go:303] Setting JSON to false
	I0601 10:31:11.629131    9440 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11407,"bootTime":1654068064,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 10:31:11.629131    9440 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 10:31:11.633150    9440 out.go:177] * [functional-20220601102952-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 10:31:11.637126    9440 notify.go:193] Checking for updates...
	I0601 10:31:11.639130    9440 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 10:31:11.641136    9440 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 10:31:11.644127    9440 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:31:11.648655    9440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:31:11.651987    9440 config.go:178] Loaded profile config "functional-20220601102952-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 10:31:11.651987    9440 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:31:14.218074    9440 docker.go:137] docker version: linux-20.10.14
	I0601 10:31:14.225021    9440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:31:16.172826    9440 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.947606s)
	I0601 10:31:16.173622    9440 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 10:31:15.1821752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:31:16.177421    9440 out.go:177] * Using the docker driver based on existing profile
	I0601 10:31:16.179664    9440 start.go:284] selected driver: docker
	I0601 10:31:16.179810    9440 start.go:806] validating driver "docker" against &{Name:functional-20220601102952-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102952-9404 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:31:16.179810    9440 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:31:16.199028    9440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:31:18.198778    9440 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9997275s)
	I0601 10:31:18.198778    9440 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 10:31:17.1965842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:31:18.259392    9440 cni.go:95] Creating CNI manager for ""
	I0601 10:31:18.259460    9440 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 10:31:18.259541    9440 start_flags.go:306] config:
	{Name:functional-20220601102952-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102952-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:31:18.264019    9440 out.go:177] * Starting control plane node functional-20220601102952-9404 in cluster functional-20220601102952-9404
	I0601 10:31:18.284384    9440 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 10:31:18.287067    9440 out.go:177] * Pulling base image ...
	I0601 10:31:18.292368    9440 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 10:31:18.292501    9440 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 10:31:18.292667    9440 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 10:31:18.292667    9440 cache.go:57] Caching tarball of preloaded images
	I0601 10:31:18.292667    9440 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 10:31:18.292667    9440 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 10:31:18.295091    9440 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220601102952-9404\config.json ...
	I0601 10:31:19.348275    9440 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 10:31:19.348275    9440 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:31:19.348275    9440 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:31:19.348275    9440 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 10:31:19.348805    9440 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 10:31:19.348921    9440 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 10:31:19.348957    9440 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 10:31:19.348957    9440 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 10:31:19.348957    9440 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:31:21.580330    9440 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-969572448: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-969572448: read-only file system"}
	I0601 10:31:21.580429    9440 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 10:31:21.580429    9440 cache.go:206] Successfully downloaded all kic artifacts
	I0601 10:31:21.580546    9440 start.go:352] acquiring machines lock for functional-20220601102952-9404: {Name:mkb7180899e96a2b9c65d995d84f5cf4fd14422e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:31:21.580815    9440 start.go:356] acquired machines lock for "functional-20220601102952-9404" in 269.6µs
	I0601 10:31:21.581060    9440 start.go:94] Skipping create...Using existing machine configuration
	I0601 10:31:21.581060    9440 fix.go:55] fixHost starting: 
	I0601 10:31:21.595977    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:31:22.625079    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:31:22.625079    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0290905s)
	I0601 10:31:22.625079    9440 fix.go:103] recreateIfNeeded on functional-20220601102952-9404: state= err=unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:22.625079    9440 fix.go:108] machineExists: false. err=machine does not exist
	I0601 10:31:22.629237    9440 out.go:177] * docker "functional-20220601102952-9404" container is missing, will recreate.
	I0601 10:31:22.631545    9440 delete.go:124] DEMOLISHING functional-20220601102952-9404 ...
	I0601 10:31:22.645487    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:31:23.657460    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:31:23.657460    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0119334s)
	W0601 10:31:23.657460    9440 stop.go:75] unable to get state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:23.657460    9440 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:23.671050    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:31:24.703752    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:31:24.703949    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0326908s)
	I0601 10:31:24.704020    9440 delete.go:82] Unable to get host status for functional-20220601102952-9404, assuming it has already been deleted: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:24.712427    9440 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
	W0601 10:31:25.768166    9440 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
	I0601 10:31:25.768166    9440 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.055727s)
	I0601 10:31:25.768166    9440 kic.go:356] could not find the container functional-20220601102952-9404 to remove it. will try anyways
	I0601 10:31:25.775693    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:31:26.782703    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:31:26.782703    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0067815s)
	W0601 10:31:26.782703    9440 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:26.791482    9440 cli_runner.go:164] Run: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0"
	W0601 10:31:27.788126    9440 cli_runner.go:211] docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 10:31:27.788126    9440 oci.go:625] error shutdown functional-20220601102952-9404: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:28.800239    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:31:29.855474    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:31:29.855474    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.055224s)
	I0601 10:31:29.855474    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:29.855474    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:31:29.855474    9440 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:30.428130    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:31:31.438081    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:31:31.438081    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0098152s)
	I0601 10:31:31.438081    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:31.438081    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:31:31.438081    9440 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:32.537316    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:31:33.580607    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:31:33.580607    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0432093s)
	I0601 10:31:33.580607    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:33.580607    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:31:33.580607    9440 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:34.907517    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:31:35.929906    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:31:35.929906    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0218427s)
	I0601 10:31:35.929906    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:35.929906    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:31:35.929906    9440 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:37.530949    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:31:38.541625    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:31:38.541625    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0104441s)
	I0601 10:31:38.541625    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:38.541625    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:31:38.541625    9440 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:40.904404    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:31:41.921576    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:31:41.921576    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0171602s)
	I0601 10:31:41.921576    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:41.921576    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:31:41.921576    9440 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:46.449387    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:31:47.501658    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:31:47.501658    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0522593s)
	I0601 10:31:47.501658    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:31:47.501658    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:31:47.501658    9440 oci.go:88] couldn't shut down functional-20220601102952-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	 
	I0601 10:31:47.510460    9440 cli_runner.go:164] Run: docker rm -f -v functional-20220601102952-9404
	I0601 10:31:48.543961    9440 cli_runner.go:217] Completed: docker rm -f -v functional-20220601102952-9404: (1.0333364s)
	I0601 10:31:48.553234    9440 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
	W0601 10:31:49.592742    9440 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
	I0601 10:31:49.592742    9440 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.0394974s)
	I0601 10:31:49.599390    9440 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:31:50.628223    9440 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:31:50.628223    9440 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0278371s)
	I0601 10:31:50.637244    9440 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
	I0601 10:31:50.637521    9440 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
	W0601 10:31:51.675874    9440 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
	I0601 10:31:51.675907    9440 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0381878s)
	I0601 10:31:51.675959    9440 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220601102952-9404
	I0601 10:31:51.675959    9440 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220601102952-9404
	
	** /stderr **
	W0601 10:31:51.677117    9440 delete.go:139] delete failed (probably ok) <nil>
	I0601 10:31:51.677227    9440 fix.go:115] Sleeping 1 second for extra luck!
	I0601 10:31:52.690458    9440 start.go:131] createHost starting for "" (driver="docker")
	I0601 10:31:52.693756    9440 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0601 10:31:52.694143    9440 start.go:165] libmachine.API.Create for "functional-20220601102952-9404" (driver="docker")
	I0601 10:31:52.694175    9440 client.go:168] LocalClient.Create starting
	I0601 10:31:52.694885    9440 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 10:31:52.695246    9440 main.go:134] libmachine: Decoding PEM data...
	I0601 10:31:52.695327    9440 main.go:134] libmachine: Parsing certificate...
	I0601 10:31:52.695446    9440 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 10:31:52.695446    9440 main.go:134] libmachine: Decoding PEM data...
	I0601 10:31:52.695446    9440 main.go:134] libmachine: Parsing certificate...
	I0601 10:31:52.705041    9440 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:31:53.727422    9440 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:31:53.727479    9440 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0222652s)
	I0601 10:31:53.734780    9440 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
	I0601 10:31:53.735354    9440 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
	W0601 10:31:54.755869    9440 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
	I0601 10:31:54.755937    9440 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.020394s)
	I0601 10:31:54.755964    9440 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220601102952-9404
	I0601 10:31:54.755964    9440 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220601102952-9404
	
	** /stderr **
	I0601 10:31:54.763162    9440 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:31:55.785296    9440 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.022123s)
	I0601 10:31:55.803600    9440 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00060a110] misses:0}
	I0601 10:31:55.803600    9440 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:31:55.803783    9440 network_create.go:115] attempt to create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 10:31:55.809156    9440 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404
	W0601 10:31:56.856683    9440 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404 returned with exit code 1
	I0601 10:31:56.856683    9440 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: (1.0474248s)
	E0601 10:31:56.856683    9440 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.49.0/24: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9ad7dbdbcc1fb6701d815f888b44df9c8ad446e4604a0dd13ead1e0c009f9b11 (br-9ad7dbdbcc1f): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 10:31:56.856683    9440 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9ad7dbdbcc1fb6701d815f888b44df9c8ad446e4604a0dd13ead1e0c009f9b11 (br-9ad7dbdbcc1f): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9ad7dbdbcc1fb6701d815f888b44df9c8ad446e4604a0dd13ead1e0c009f9b11 (br-9ad7dbdbcc1f): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 10:31:56.870221    9440 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 10:31:57.885881    9440 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0156488s)
	I0601 10:31:57.893905    9440 cli_runner.go:164] Run: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 10:31:58.896941    9440 cli_runner.go:211] docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 10:31:58.896941    9440 cli_runner.go:217] Completed: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0030245s)
	I0601 10:31:58.896941    9440 client.go:171] LocalClient.Create took 6.2026976s
	I0601 10:32:00.911634    9440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:32:00.917169    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:01.949512    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:01.949512    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0323314s)
	I0601 10:32:01.949512    9440 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:02.129764    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:03.216928    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:03.216964    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0869617s)
	W0601 10:32:03.217087    9440 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:32:03.217087    9440 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:03.227844    9440 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:32:03.234728    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:04.277699    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:04.277857    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0428279s)
	I0601 10:32:04.278071    9440 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:04.488746    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:05.506683    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:05.506763    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0178256s)
	W0601 10:32:05.507060    9440 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:32:05.507141    9440 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:05.507141    9440 start.go:134] duration metric: createHost completed in 12.8165428s
	I0601 10:32:05.516565    9440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:32:05.522706    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:06.580706    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:06.580706    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0579879s)
	I0601 10:32:06.580706    9440 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:06.930716    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:07.949429    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:07.949429    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0187017s)
	W0601 10:32:07.950264    9440 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:32:07.950264    9440 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:07.961316    9440 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:32:07.967277    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:09.018471    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:09.018471    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0511821s)
	I0601 10:32:09.018471    9440 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:09.260878    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:10.318962    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:10.318962    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0578914s)
	W0601 10:32:10.319118    9440 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:32:10.319189    9440 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:10.319189    9440 fix.go:57] fixHost completed within 48.7375929s
	I0601 10:32:10.319297    9440 start.go:81] releasing machines lock for "functional-20220601102952-9404", held for 48.7378423s
	W0601 10:32:10.319500    9440 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	W0601 10:32:10.319784    9440 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	I0601 10:32:10.319852    9440 start.go:614] Will try again in 5 seconds ...
	I0601 10:32:15.328881    9440 start.go:352] acquiring machines lock for functional-20220601102952-9404: {Name:mkb7180899e96a2b9c65d995d84f5cf4fd14422e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:32:15.329229    9440 start.go:356] acquired machines lock for "functional-20220601102952-9404" in 163.9µs
	I0601 10:32:15.329490    9440 start.go:94] Skipping create...Using existing machine configuration
	I0601 10:32:15.329561    9440 fix.go:55] fixHost starting: 
	I0601 10:32:15.343105    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:32:16.329496    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:32:16.329496    9440 fix.go:103] recreateIfNeeded on functional-20220601102952-9404: state= err=unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:16.329496    9440 fix.go:108] machineExists: false. err=machine does not exist
	I0601 10:32:16.333503    9440 out.go:177] * docker "functional-20220601102952-9404" container is missing, will recreate.
	I0601 10:32:16.336139    9440 delete.go:124] DEMOLISHING functional-20220601102952-9404 ...
	I0601 10:32:16.355181    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:32:17.399677    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:32:17.399750    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0443138s)
	W0601 10:32:17.399811    9440 stop.go:75] unable to get state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:17.399811    9440 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:17.407877    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:32:18.436183    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:32:18.436252    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0280964s)
	I0601 10:32:18.436252    9440 delete.go:82] Unable to get host status for functional-20220601102952-9404, assuming it has already been deleted: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:18.443663    9440 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
	W0601 10:32:19.457829    9440 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:19.458039    9440 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.0140322s)
	I0601 10:32:19.458099    9440 kic.go:356] could not find the container functional-20220601102952-9404 to remove it. will try anyways
	I0601 10:32:19.465320    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:32:20.499554    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:32:20.499735    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0342225s)
	W0601 10:32:20.499798    9440 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:20.506379    9440 cli_runner.go:164] Run: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0"
	W0601 10:32:21.533121    9440 cli_runner.go:211] docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 10:32:21.533192    9440 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": (1.0265926s)
	I0601 10:32:21.533315    9440 oci.go:625] error shutdown functional-20220601102952-9404: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:22.549373    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:32:23.582687    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:32:23.582747    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0330748s)
	I0601 10:32:23.582776    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:23.582844    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:32:23.582936    9440 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:24.091656    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:32:25.130119    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:32:25.130119    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0383199s)
	I0601 10:32:25.130119    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:25.130119    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:32:25.130119    9440 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:25.734983    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:32:26.742333    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:32:26.742565    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0071436s)
	I0601 10:32:26.742565    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:26.742565    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:32:26.742565    9440 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:27.647090    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:32:28.674864    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:32:28.674864    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0277625s)
	I0601 10:32:28.674864    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:28.674864    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:32:28.674864    9440 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:30.685126    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:32:31.703474    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:32:31.703598    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0181982s)
	I0601 10:32:31.703673    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:31.703673    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:32:31.703673    9440 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:33.550137    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:32:34.559095    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:32:34.559095    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.008947s)
	I0601 10:32:34.559095    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:34.559095    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:32:34.559095    9440 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:37.250927    9440 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:32:38.266144    9440 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:32:38.266220    9440 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.015019s)
	I0601 10:32:38.266220    9440 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:38.266220    9440 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:32:38.266220    9440 oci.go:88] couldn't shut down functional-20220601102952-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	 
	I0601 10:32:38.274230    9440 cli_runner.go:164] Run: docker rm -f -v functional-20220601102952-9404
	I0601 10:32:39.310020    9440 cli_runner.go:217] Completed: docker rm -f -v functional-20220601102952-9404: (1.0355871s)
	I0601 10:32:39.316849    9440 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
	W0601 10:32:40.325916    9440 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:40.325916    9440 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.0087276s)
	I0601 10:32:40.333161    9440 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:32:41.365514    9440 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:32:41.365676    9440 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0321612s)
	I0601 10:32:41.373023    9440 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
	I0601 10:32:41.373023    9440 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
	W0601 10:32:42.400432    9440 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:42.400432    9440 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0273975s)
	I0601 10:32:42.400432    9440 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220601102952-9404
	I0601 10:32:42.400432    9440 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220601102952-9404
	
	** /stderr **
	W0601 10:32:42.401463    9440 delete.go:139] delete failed (probably ok) <nil>
	I0601 10:32:42.401463    9440 fix.go:115] Sleeping 1 second for extra luck!
	I0601 10:32:43.401882    9440 start.go:131] createHost starting for "" (driver="docker")
	I0601 10:32:43.408876    9440 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0601 10:32:43.408876    9440 start.go:165] libmachine.API.Create for "functional-20220601102952-9404" (driver="docker")
	I0601 10:32:43.409471    9440 client.go:168] LocalClient.Create starting
	I0601 10:32:43.409648    9440 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 10:32:43.410414    9440 main.go:134] libmachine: Decoding PEM data...
	I0601 10:32:43.410442    9440 main.go:134] libmachine: Parsing certificate...
	I0601 10:32:43.410442    9440 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 10:32:43.410442    9440 main.go:134] libmachine: Decoding PEM data...
	I0601 10:32:43.410442    9440 main.go:134] libmachine: Parsing certificate...
	I0601 10:32:43.418956    9440 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:32:44.458715    9440 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:32:44.458715    9440 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.039505s)
	I0601 10:32:44.465882    9440 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
	I0601 10:32:44.465882    9440 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
	W0601 10:32:45.486150    9440 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:45.486150    9440 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0200827s)
	I0601 10:32:45.486223    9440 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220601102952-9404
	I0601 10:32:45.486223    9440 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220601102952-9404
	
	** /stderr **
	I0601 10:32:45.494712    9440 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:32:46.499072    9440 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0043494s)
	I0601 10:32:46.514517    9440 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00060a110] amended:false}} dirty:map[] misses:0}
	I0601 10:32:46.514517    9440 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:32:46.529637    9440 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00060a110] amended:true}} dirty:map[192.168.49.0:0xc00060a110 192.168.58.0:0xc000ca4228] misses:0}
	I0601 10:32:46.529637    9440 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:32:46.529637    9440 network_create.go:115] attempt to create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 10:32:46.535789    9440 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404
	W0601 10:32:47.546227    9440 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:47.546339    9440 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: (1.0104271s)
	E0601 10:32:47.546426    9440 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.58.0/24: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b5475ef867fbcfeb47dd4128b08bfb01fde465836bf3f20af041f5cccad60cef (br-b5475ef867fb): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 10:32:47.546669    9440 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b5475ef867fbcfeb47dd4128b08bfb01fde465836bf3f20af041f5cccad60cef (br-b5475ef867fb): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b5475ef867fbcfeb47dd4128b08bfb01fde465836bf3f20af041f5cccad60cef (br-b5475ef867fb): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 10:32:47.559304    9440 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 10:32:48.591389    9440 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0319137s)
	I0601 10:32:48.598829    9440 cli_runner.go:164] Run: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 10:32:49.611746    9440 cli_runner.go:211] docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 10:32:49.611746    9440 cli_runner.go:217] Completed: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0129062s)
	I0601 10:32:49.611746    9440 client.go:171] LocalClient.Create took 6.2022066s
	I0601 10:32:51.623333    9440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:32:51.631860    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:52.641079    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:52.641079    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.009207s)
	I0601 10:32:52.641079    9440 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:52.930283    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:53.941567    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:53.941567    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0110814s)
	W0601 10:32:53.941567    9440 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:32:53.941567    9440 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:53.951385    9440 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:32:53.957059    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:54.983923    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:54.983923    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0268532s)
	I0601 10:32:54.983923    9440 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:55.191455    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:56.245388    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:56.245492    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0537539s)
	W0601 10:32:56.245492    9440 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:32:56.245492    9440 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:56.245492    9440 start.go:134] duration metric: createHost completed in 12.8434669s
	I0601 10:32:56.259763    9440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:32:56.265824    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:57.285375    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:57.285474    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0193691s)
	I0601 10:32:57.285753    9440 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:57.611435    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:58.679017    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:58.679048    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0674326s)
	W0601 10:32:58.679244    9440 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:32:58.679291    9440 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:32:58.689347    9440 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:32:58.694923    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:32:59.753417    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:32:59.753579    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0584051s)
	I0601 10:32:59.753710    9440 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:33:00.108395    9440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:33:01.114628    9440 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:33:01.114671    9440 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0059957s)
	W0601 10:33:01.115110    9440 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:33:01.115148    9440 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:33:01.115198    9440 fix.go:57] fixHost completed within 45.7851281s
	I0601 10:33:01.115272    9440 start.go:81] releasing machines lock for "functional-20220601102952-9404", held for 45.7854313s
	W0601 10:33:01.115934    9440 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220601102952-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p functional-20220601102952-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	I0601 10:33:01.121378    9440 out.go:177] 
	W0601 10:33:01.123547    9440 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	W0601 10:33:01.123547    9440 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 10:33:01.123547    9440 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 10:33:01.126552    9440 out.go:177] 

** /stderr **
functional_test.go:653: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --alsologtostderr -v=8": exit status 60
functional_test.go:655: soft start took 1m50.0133909s for "functional-20220601102952-9404" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.1721693s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (2.8409213s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:33:05.339444    9920 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/SoftStart (114.04s)

TestFunctional/serial/KubeContext (4.25s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
functional_test.go:673: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (340.4787ms)

** stderr ** 
	W0601 10:33:05.645520    7484 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: current-context is not set

** /stderr **
functional_test.go:675: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:679: expected current-context = "functional-20220601102952-9404", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubeContext]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.1020411s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (2.7898435s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:33:09.585467    5412 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubeContext (4.25s)

TestFunctional/serial/KubectlGetPods (4.17s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220601102952-9404 get po -A
functional_test.go:688: (dbg) Non-zero exit: kubectl --context functional-20220601102952-9404 get po -A: exit status 1 (292.9236ms)

** stderr ** 
	W0601 10:33:09.833763    8016 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* cluster has no server defined

** /stderr **
functional_test.go:690: failed to get kubectl pods: args "kubectl --context functional-20220601102952-9404 get po -A" : exit status 1
functional_test.go:694: expected stderr to be empty but got *"W0601 10:33:09.833763    8016 loader.go:223] Config not found: C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig\nError in configuration: \n* context was not found for specified context: functional-20220601102952-9404\n* cluster has no server defined\n"*: args "kubectl --context functional-20220601102952-9404 get po -A"
functional_test.go:697: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-20220601102952-9404 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.0846958s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (2.7720079s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:33:13.750521    9976 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubectlGetPods (4.17s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh sudo crictl images
functional_test.go:1116: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh sudo crictl images: exit status 80 (3.1086575s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f40552ee918ac053c4c404bc1ee7532c196ce64c_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1118: failed to get images by "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh sudo crictl images" ssh exit status 80
functional_test.go:1122: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f40552ee918ac053c4c404bc1ee7532c196ce64c_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.11s)

TestFunctional/serial/CacheCmd/cache/cache_reload (12.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1139: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh sudo docker rmi k8s.gcr.io/pause:latest: exit status 80 (3.1360488s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_695159ccd5e0da3f5d811f2823eb9163b9dc65a6_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1142: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh sudo docker rmi k8s.gcr.io/pause:latest" : exit status 80
functional_test.go:1145: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (3.0545076s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_90c12c9ea894b73e3971aa1ec67d0a7aeefe0b8f_2.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cache reload: (2.9614637s)
functional_test.go:1155: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (3.0842675s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_90c12c9ea894b73e3971aa1ec67d0a7aeefe0b8f_2.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1157: expected "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh sudo crictl inspecti k8s.gcr.io/pause:latest" to run successfully but got error: exit status 80
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (12.24s)

TestFunctional/serial/MinikubeKubectlCmd (5.9s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 kubectl -- --context functional-20220601102952-9404 get pods
functional_test.go:708: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 kubectl -- --context functional-20220601102952-9404 get pods: exit status 1 (1.9980511s)

** stderr ** 
	W0601 10:33:48.123103    6252 loader.go:221] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* no server found for cluster "functional-20220601102952-9404"

** /stderr **
functional_test.go:711: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 kubectl -- --context functional-20220601102952-9404 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.0667155s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (2.8306947s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:33:52.095129    6844 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (5.90s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (5.88s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out\kubectl.exe --context functional-20220601102952-9404 get pods
functional_test.go:733: (dbg) Non-zero exit: out\kubectl.exe --context functional-20220601102952-9404 get pods: exit status 1 (1.960827s)

** stderr ** 
	W0601 10:33:53.973432    8240 loader.go:221] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* no server found for cluster "functional-20220601102952-9404"

** /stderr **
functional_test.go:736: failed to run kubectl directly. args "out\\kubectl.exe --context functional-20220601102952-9404 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.1028315s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (2.8051384s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:33:57.979323    9536 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (5.88s)

TestFunctional/serial/ExtraConfig (113.83s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 60 (1m49.8522738s)

-- stdout --
	* [functional-20220601102952-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20220601102952-9404 in cluster functional-20220601102952-9404
	* Pulling base image ...
	* docker "functional-20220601102952-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20220601102952-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	E0601 10:34:43.514877    7808 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.49.0/24: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6258869651f85369ce8c18f7218c1ef0b2d0a274032a03de802efa8036fb59a6 (br-6258869651f8): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6258869651f85369ce8c18f7218c1ef0b2d0a274032a03de802efa8036fb59a6 (br-6258869651f8): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	E0601 10:35:34.245475    7808 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.58.0/24: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ca6b4559829ad63693afe8d6cf750b3a62b8b574f6557884a5f42b321bfb9e73 (br-ca6b4559829a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ca6b4559829ad63693afe8d6cf750b3a62b8b574f6557884a5f42b321bfb9e73 (br-ca6b4559829a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p functional-20220601102952-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
functional_test.go:751: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 60
functional_test.go:753: restart took 1m49.8527874s for "functional-20220601102952-9404" cluster.
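The failure chain above starts with Docker refusing the dedicated bridge network: `networks have overlapping IPv4` means a stale `br-*` bridge already claims 192.168.49.0/24 (and, on retry, 192.168.58.0/24). minikube then falls back to recreating the container anyway, and volume creation hits the read-only `/var/lib/docker`, which is what ultimately produces `PR_DOCKER_READONLY_VOL`. The overlap check itself is ordinary CIDR arithmetic, sketched here with Python's `ipaddress` module (illustrative, not Docker's implementation):

```python
import ipaddress

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    """True if two IPv4 CIDR blocks share at least one address."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# The subnets minikube tried, per the stderr above: each attempt collides
# with an existing bridge covering the same /24.
assert overlaps("192.168.49.0/24", "192.168.49.0/24")
# The two attempts do not overlap each other; the conflicts were with
# leftover br-* bridges from earlier test profiles.
assert not overlaps("192.168.49.0/24", "192.168.58.0/24")
```

Pruning the leftover bridges (`docker network prune`) typically clears the overlap; the read-only-filesystem error is a separate Docker Desktop/WSL2 fault, which is why the log suggests restarting Docker and points at issue 6825.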
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.1138914s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (2.8506029s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:35:51.808139    3624 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ExtraConfig (113.83s)

TestFunctional/serial/ComponentHealth (4.16s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220601102952-9404 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:802: (dbg) Non-zero exit: kubectl --context functional-20220601102952-9404 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (296.4579ms)

** stderr ** 
	W0601 10:35:52.060616    9280 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220601102952-9404" does not exist

** /stderr **
functional_test.go:804: failed to get components. args "kubectl --context functional-20220601102952-9404 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.1055779s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (2.739383s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:35:55.964949    9928 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ComponentHealth (4.16s)

TestFunctional/serial/LogsCmd (3.55s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 logs
functional_test.go:1228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 logs: exit status 80 (3.13213s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                Args                 |               Profile               |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
	| delete  | --all                               | download-only-20220601102309-9404   | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:23 GMT | 01 Jun 22 10:23 GMT |
	| delete  | -p                                  | download-only-20220601102309-9404   | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:23 GMT | 01 Jun 22 10:24 GMT |
	|         | download-only-20220601102309-9404   |                                     |                   |                |                     |                     |
	| delete  | -p                                  | download-only-20220601102309-9404   | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:24 GMT | 01 Jun 22 10:24 GMT |
	|         | download-only-20220601102309-9404   |                                     |                   |                |                     |                     |
	| delete  | -p                                  | download-docker-20220601102408-9404 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:24 GMT | 01 Jun 22 10:24 GMT |
	|         | download-docker-20220601102408-9404 |                                     |                   |                |                     |                     |
	| delete  | -p                                  | binary-mirror-20220601102453-9404   | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:25 GMT | 01 Jun 22 10:25 GMT |
	|         | binary-mirror-20220601102453-9404   |                                     |                   |                |                     |                     |
	| delete  | -p addons-20220601102510-9404       | addons-20220601102510-9404          | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:26 GMT | 01 Jun 22 10:26 GMT |
	| delete  | -p nospam-20220601102633-9404       | nospam-20220601102633-9404          | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:29 GMT | 01 Jun 22 10:29 GMT |
	| cache   | functional-20220601102952-9404      | functional-20220601102952-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	|         | cache add k8s.gcr.io/pause:3.1      |                                     |                   |                |                     |                     |
	| cache   | functional-20220601102952-9404      | functional-20220601102952-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	|         | cache add k8s.gcr.io/pause:3.3      |                                     |                   |                |                     |                     |
	| cache   | functional-20220601102952-9404      | functional-20220601102952-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	|         | cache add                           |                                     |                   |                |                     |                     |
	|         | k8s.gcr.io/pause:latest             |                                     |                   |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.3         | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	| cache   | list                                | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	| cache   | functional-20220601102952-9404      | functional-20220601102952-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	|         | cache reload                        |                                     |                   |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.1         | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	| cache   | delete k8s.gcr.io/pause:latest      | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 10:33:58
	Running on machine: minikube2
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 10:33:58.233859    7808 out.go:296] Setting OutFile to fd 664 ...
	I0601 10:33:58.298266    7808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:33:58.298266    7808 out.go:309] Setting ErrFile to fd 620...
	I0601 10:33:58.298266    7808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:33:58.313272    7808 out.go:303] Setting JSON to false
	I0601 10:33:58.315269    7808 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11573,"bootTime":1654068065,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 10:33:58.316320    7808 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 10:33:58.322761    7808 out.go:177] * [functional-20220601102952-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 10:33:58.325809    7808 notify.go:193] Checking for updates...
	I0601 10:33:58.329825    7808 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 10:33:58.332847    7808 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 10:33:58.339053    7808 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:33:58.344286    7808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:33:58.347973    7808 config.go:178] Loaded profile config "functional-20220601102952-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 10:33:58.348961    7808 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:34:00.889629    7808 docker.go:137] docker version: linux-20.10.14
	I0601 10:34:00.896720    7808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:34:02.975950    7808 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0792066s)
	I0601 10:34:02.976895    7808 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-01 10:34:01.8998586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:34:02.981807    7808 out.go:177] * Using the docker driver based on existing profile
	I0601 10:34:02.984064    7808 start.go:284] selected driver: docker
	I0601 10:34:02.984064    7808 start.go:806] validating driver "docker" against &{Name:functional-20220601102952-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102952-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:34:02.984154    7808 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:34:03.003705    7808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:34:05.058843    7808 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0551149s)
	I0601 10:34:05.058843    7808 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-01 10:34:04.0296243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:34:05.106319    7808 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 10:34:05.106319    7808 cni.go:95] Creating CNI manager for ""
	I0601 10:34:05.106319    7808 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 10:34:05.106319    7808 start_flags.go:306] config:
	{Name:functional-20220601102952-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102952-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:34:05.112874    7808 out.go:177] * Starting control plane node functional-20220601102952-9404 in cluster functional-20220601102952-9404
	I0601 10:34:05.114761    7808 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 10:34:05.118876    7808 out.go:177] * Pulling base image ...
	I0601 10:34:05.121427    7808 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 10:34:05.121427    7808 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 10:34:05.121427    7808 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 10:34:05.121427    7808 cache.go:57] Caching tarball of preloaded images
	I0601 10:34:05.121427    7808 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 10:34:05.121427    7808 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 10:34:05.122364    7808 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220601102952-9404\config.json ...
	I0601 10:34:06.167838    7808 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 10:34:06.167941    7808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:34:06.168265    7808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:34:06.168307    7808 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 10:34:06.168508    7808 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 10:34:06.168508    7808 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 10:34:06.168798    7808 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 10:34:06.168798    7808 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 10:34:06.168884    7808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:34:08.436323    7808 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 10:34:08.436393    7808 cache.go:206] Successfully downloaded all kic artifacts
	I0601 10:34:08.436564    7808 start.go:352] acquiring machines lock for functional-20220601102952-9404: {Name:mkb7180899e96a2b9c65d995d84f5cf4fd14422e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:34:08.436703    7808 start.go:356] acquired machines lock for "functional-20220601102952-9404" in 138.9µs
	I0601 10:34:08.436959    7808 start.go:94] Skipping create...Using existing machine configuration
	I0601 10:34:08.437043    7808 fix.go:55] fixHost starting: 
	I0601 10:34:08.451473    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:34:09.459801    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:34:09.459801    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0081256s)
	I0601 10:34:09.460027    7808 fix.go:103] recreateIfNeeded on functional-20220601102952-9404: state= err=unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:09.460087    7808 fix.go:108] machineExists: false. err=machine does not exist
	I0601 10:34:09.470253    7808 out.go:177] * docker "functional-20220601102952-9404" container is missing, will recreate.
	I0601 10:34:09.473028    7808 delete.go:124] DEMOLISHING functional-20220601102952-9404 ...
	I0601 10:34:09.486161    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:34:10.500995    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:34:10.501173    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0145883s)
	W0601 10:34:10.501243    7808 stop.go:75] unable to get state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:10.501243    7808 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:10.515532    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:34:11.541677    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:34:11.541677    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0261337s)
	I0601 10:34:11.541677    7808 delete.go:82] Unable to get host status for functional-20220601102952-9404, assuming it has already been deleted: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:11.549276    7808 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
	W0601 10:34:12.554073    7808 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
	I0601 10:34:12.554073    7808 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.0047857s)
	I0601 10:34:12.554073    7808 kic.go:356] could not find the container functional-20220601102952-9404 to remove it. will try anyways
	I0601 10:34:12.560830    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:34:13.603034    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:34:13.603034    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0421926s)
	W0601 10:34:13.603034    7808 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:13.610650    7808 cli_runner.go:164] Run: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0"
	W0601 10:34:14.634458    7808 cli_runner.go:211] docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 10:34:14.634492    7808 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": (1.0236372s)
	I0601 10:34:14.634569    7808 oci.go:625] error shutdown functional-20220601102952-9404: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:15.648361    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:34:16.671852    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:34:16.671852    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0233507s)
	I0601 10:34:16.671909    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:16.671962    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:34:16.671991    7808 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:17.238467    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:34:18.276363    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:34:18.276363    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0376512s)
	I0601 10:34:18.276363    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:18.276363    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:34:18.276363    7808 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:19.373633    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:34:20.362653    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:34:20.362653    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:20.362653    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:34:20.362653    7808 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:21.693757    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:34:22.716124    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:34:22.716344    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0223555s)
	I0601 10:34:22.716419    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:22.716446    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:34:22.716446    7808 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:24.317633    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:34:25.337128    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:34:25.337163    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0193017s)
	I0601 10:34:25.337322    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:25.337322    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:34:25.337397    7808 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:27.686870    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:34:28.697307    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:34:28.697307    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0104261s)
	I0601 10:34:28.697307    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:28.697307    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:34:28.697307    7808 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:33.217787    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:34:34.208806    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:34:34.208806    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:34.208806    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:34:34.208806    7808 oci.go:88] couldn't shut down functional-20220601102952-9404 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	 
	I0601 10:34:34.216041    7808 cli_runner.go:164] Run: docker rm -f -v functional-20220601102952-9404
	I0601 10:34:35.234076    7808 cli_runner.go:217] Completed: docker rm -f -v functional-20220601102952-9404: (1.018023s)
	I0601 10:34:35.242025    7808 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
	W0601 10:34:36.281874    7808 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
	I0601 10:34:36.281874    7808 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.0397065s)
	I0601 10:34:36.289630    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:34:37.343455    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:34:37.343455    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0536643s)
	I0601 10:34:37.351699    7808 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
	I0601 10:34:37.351699    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
	W0601 10:34:38.367605    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
	I0601 10:34:38.367605    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0157441s)
	I0601 10:34:38.367605    7808 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220601102952-9404
	I0601 10:34:38.367605    7808 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220601102952-9404
	
	** /stderr **
	W0601 10:34:38.368275    7808 delete.go:139] delete failed (probably ok) <nil>
	I0601 10:34:38.368275    7808 fix.go:115] Sleeping 1 second for extra luck!
	I0601 10:34:39.369250    7808 start.go:131] createHost starting for "" (driver="docker")
	I0601 10:34:39.373571    7808 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0601 10:34:39.374011    7808 start.go:165] libmachine.API.Create for "functional-20220601102952-9404" (driver="docker")
	I0601 10:34:39.374011    7808 client.go:168] LocalClient.Create starting
	I0601 10:34:39.374581    7808 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 10:34:39.374944    7808 main.go:134] libmachine: Decoding PEM data...
	I0601 10:34:39.374944    7808 main.go:134] libmachine: Parsing certificate...
	I0601 10:34:39.375265    7808 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 10:34:39.375378    7808 main.go:134] libmachine: Decoding PEM data...
	I0601 10:34:39.375493    7808 main.go:134] libmachine: Parsing certificate...
	I0601 10:34:39.383991    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:34:40.397320    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:34:40.397505    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0133179s)
	I0601 10:34:40.405503    7808 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
	I0601 10:34:40.405503    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
	W0601 10:34:41.407359    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
	I0601 10:34:41.407385    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0017987s)
	I0601 10:34:41.407385    7808 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220601102952-9404
	I0601 10:34:41.407427    7808 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220601102952-9404
	
	** /stderr **
	I0601 10:34:41.414960    7808 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:34:42.440609    7808 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.025638s)
	I0601 10:34:42.458460    7808 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006a20] misses:0}
	I0601 10:34:42.458460    7808 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:34:42.458460    7808 network_create.go:115] attempt to create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 10:34:42.466241    7808 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404
	W0601 10:34:43.514717    7808 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404 returned with exit code 1
	I0601 10:34:43.514717    7808 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: (1.0482283s)
	E0601 10:34:43.514877    7808 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.49.0/24: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6258869651f85369ce8c18f7218c1ef0b2d0a274032a03de802efa8036fb59a6 (br-6258869651f8): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 10:34:43.515077    7808 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6258869651f85369ce8c18f7218c1ef0b2d0a274032a03de802efa8036fb59a6 (br-6258869651f8): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 10:34:43.528389    7808 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 10:34:44.559674    7808 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0311157s)
	I0601 10:34:44.567012    7808 cli_runner.go:164] Run: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 10:34:45.626524    7808 cli_runner.go:211] docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 10:34:45.626773    7808 cli_runner.go:217] Completed: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0594997s)
	I0601 10:34:45.626911    7808 client.go:171] LocalClient.Create took 6.25283s
	I0601 10:34:47.640568    7808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:34:47.646562    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:34:48.653290    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:34:48.653290    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0064272s)
	I0601 10:34:48.653442    7808 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:48.834436    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:34:49.863840    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:34:49.863840    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0291974s)
	W0601 10:34:49.863878    7808 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:34:49.863878    7808 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:49.873617    7808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:34:49.879584    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:34:50.894916    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:34:50.894916    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0153213s)
	I0601 10:34:50.894916    7808 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:51.106868    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:34:52.105249    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	W0601 10:34:52.105249    7808 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:34:52.105249    7808 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:52.105249    7808 start.go:134] duration metric: createHost completed in 12.7358571s
	I0601 10:34:52.114256    7808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:34:52.119242    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:34:53.135480    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:34:53.135506    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0161596s)
	I0601 10:34:53.135506    7808 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:53.471929    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:34:54.491841    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:34:54.491978    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0199001s)
	W0601 10:34:54.492269    7808 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:34:54.492269    7808 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:54.502421    7808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:34:54.508470    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:34:55.532346    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:34:55.532503    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0237274s)
	I0601 10:34:55.532642    7808 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:55.773648    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:34:56.790698    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:34:56.790698    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.017038s)
	W0601 10:34:56.790698    7808 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:34:56.790698    7808 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:34:56.790698    7808 fix.go:57] fixHost completed within 48.3531991s
	I0601 10:34:56.790698    7808 start.go:81] releasing machines lock for "functional-20220601102952-9404", held for 48.3533716s
	W0601 10:34:56.791236    7808 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	W0601 10:34:56.791650    7808 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	I0601 10:34:56.791650    7808 start.go:614] Will try again in 5 seconds ...
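The failure above has two distinct daemon errors: `docker network create` refused the 192.168.49.0/24 bridge because an existing network already claims an overlapping IPv4 range, and `docker volume create` hit a read-only `/var/lib/docker`, which suggests the Docker Desktop backend itself was unhealthy. A hypothetical helper for the first symptom — listing every bridge network's subnet and flagging exact-duplicate CIDRs (the common case of the overlap error); it assumes `docker` is on PATH, and the duplicate check itself runs on plain text without a daemon:

```shell
# List "<name> <subnet>" for every Docker network on the host.
# Requires a running Docker daemon.
list_subnets() {
  docker network ls --format '{{.Name}}' |
    xargs -I{} docker network inspect {} \
      --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
}

# Read "<name> <subnet>" lines on stdin and print any subnet that
# appears more than once, together with the networks that use it.
# (Only catches identical CIDRs, not partially overlapping ranges.)
find_overlaps() {
  awk '{count[$2]++; line[$2] = line[$2] " " $1}
       END {for (s in count) if (count[s] > 1) print s ":" line[s]}'
}
```

Usage would be `list_subnets | find_overlaps`; a non-empty result identifies the networks to remove (e.g. with `docker network rm`) before minikube retries the subnet.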
	I0601 10:35:01.794641    7808 start.go:352] acquiring machines lock for functional-20220601102952-9404: {Name:mkb7180899e96a2b9c65d995d84f5cf4fd14422e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:35:01.794641    7808 start.go:356] acquired machines lock for "functional-20220601102952-9404" in 0s
	I0601 10:35:01.795241    7808 start.go:94] Skipping create...Using existing machine configuration
	I0601 10:35:01.795241    7808 fix.go:55] fixHost starting: 
	I0601 10:35:01.808965    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:35:02.855630    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:35:02.855675    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.046522s)
	I0601 10:35:02.855830    7808 fix.go:103] recreateIfNeeded on functional-20220601102952-9404: state= err=unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:02.855852    7808 fix.go:108] machineExists: false. err=machine does not exist
	I0601 10:35:02.859654    7808 out.go:177] * docker "functional-20220601102952-9404" container is missing, will recreate.
	I0601 10:35:02.873703    7808 delete.go:124] DEMOLISHING functional-20220601102952-9404 ...
	I0601 10:35:02.886864    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:35:03.900015    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:35:03.900015    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0131393s)
	W0601 10:35:03.900015    7808 stop.go:75] unable to get state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:03.900015    7808 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:03.913414    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:35:04.985562    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:35:04.985562    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0719939s)
	I0601 10:35:04.985759    7808 delete.go:82] Unable to get host status for functional-20220601102952-9404, assuming it has already been deleted: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:04.993499    7808 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
	W0601 10:35:06.017445    7808 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:06.017445    7808 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.0239353s)
	I0601 10:35:06.017445    7808 kic.go:356] could not find the container functional-20220601102952-9404 to remove it. will try anyways
	I0601 10:35:06.025004    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:35:07.032496    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:35:07.032496    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0074808s)
	W0601 10:35:07.032496    7808 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:07.039836    7808 cli_runner.go:164] Run: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0"
	W0601 10:35:08.054132    7808 cli_runner.go:211] docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 10:35:08.054132    7808 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": (1.0142843s)
	I0601 10:35:08.054132    7808 oci.go:625] error shutdown functional-20220601102952-9404: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:09.077289    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:35:10.092714    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:35:10.092748    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0151875s)
	I0601 10:35:10.092748    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:10.092748    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:35:10.092748    7808 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:10.586899    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:35:11.615443    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:35:11.615443    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0285323s)
	I0601 10:35:11.615443    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:11.615443    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:35:11.615443    7808 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:12.225197    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:35:13.249992    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:35:13.250108    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0246992s)
	I0601 10:35:13.250180    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:13.250180    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:35:13.250248    7808 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:14.152944    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:35:15.200613    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:35:15.200810    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0476571s)
	I0601 10:35:15.200885    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:15.200885    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:35:15.200885    7808 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:17.211889    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:35:18.250481    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:35:18.250481    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.038581s)
	I0601 10:35:18.250481    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:18.250481    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:35:18.250481    7808 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:20.080903    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:35:21.088683    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:35:21.088683    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0077695s)
	I0601 10:35:21.088900    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:21.088900    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:35:21.088929    7808 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:23.779492    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:35:24.852209    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:35:24.852209    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0727047s)
	I0601 10:35:24.852328    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:24.852328    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
	I0601 10:35:24.852422    7808 oci.go:88] couldn't shut down functional-20220601102952-9404 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	 
	I0601 10:35:24.859843    7808 cli_runner.go:164] Run: docker rm -f -v functional-20220601102952-9404
	I0601 10:35:25.885589    7808 cli_runner.go:217] Completed: docker rm -f -v functional-20220601102952-9404: (1.0256621s)
	I0601 10:35:25.892104    7808 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
	W0601 10:35:26.929997    7808 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:26.929997    7808 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.037881s)
	I0601 10:35:26.937919    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:35:27.992129    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:35:27.992129    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0540189s)
	I0601 10:35:27.999625    7808 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
	I0601 10:35:27.999625    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
	W0601 10:35:29.031726    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:29.031726    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0320891s)
	I0601 10:35:29.031726    7808 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220601102952-9404
	I0601 10:35:29.031726    7808 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220601102952-9404
	
	** /stderr **
	W0601 10:35:29.031726    7808 delete.go:139] delete failed (probably ok) <nil>
	I0601 10:35:29.031726    7808 fix.go:115] Sleeping 1 second for extra luck!
	I0601 10:35:30.043662    7808 start.go:131] createHost starting for "" (driver="docker")
	I0601 10:35:30.047377    7808 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0601 10:35:30.048207    7808 start.go:165] libmachine.API.Create for "functional-20220601102952-9404" (driver="docker")
	I0601 10:35:30.048207    7808 client.go:168] LocalClient.Create starting
	I0601 10:35:30.049295    7808 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 10:35:30.049295    7808 main.go:134] libmachine: Decoding PEM data...
	I0601 10:35:30.049295    7808 main.go:134] libmachine: Parsing certificate...
	I0601 10:35:30.049824    7808 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 10:35:30.049935    7808 main.go:134] libmachine: Decoding PEM data...
	I0601 10:35:30.050027    7808 main.go:134] libmachine: Parsing certificate...
	I0601 10:35:30.058751    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:35:31.060454    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:35:31.060454    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0015144s)
	I0601 10:35:31.069123    7808 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
	I0601 10:35:31.069123    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
	W0601 10:35:32.124043    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:32.124043    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0549091s)
	I0601 10:35:32.124043    7808 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220601102952-9404
	I0601 10:35:32.124043    7808 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220601102952-9404
	
	** /stderr **
	I0601 10:35:32.132197    7808 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:35:33.168524    7808 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.036316s)
	I0601 10:35:33.185998    7808 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006a20] amended:false}} dirty:map[] misses:0}
	I0601 10:35:33.185998    7808 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:35:33.201005    7808 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006a20] amended:true}} dirty:map[192.168.49.0:0xc000006a20 192.168.58.0:0xc0008a8900] misses:0}
	I0601 10:35:33.201005    7808 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:35:33.201005    7808 network_create.go:115] attempt to create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 10:35:33.207804    7808 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404
	W0601 10:35:34.245342    7808 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:34.245475    7808 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: (1.0375266s)
	E0601 10:35:34.245475    7808 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.58.0/24: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ca6b4559829ad63693afe8d6cf750b3a62b8b574f6557884a5f42b321bfb9e73 (br-ca6b4559829a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 10:35:34.245825    7808 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ca6b4559829ad63693afe8d6cf750b3a62b8b574f6557884a5f42b321bfb9e73 (br-ca6b4559829a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 10:35:34.259541    7808 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 10:35:35.279439    7808 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0198869s)
	I0601 10:35:35.286316    7808 cli_runner.go:164] Run: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 10:35:36.339862    7808 cli_runner.go:211] docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 10:35:36.340098    7808 cli_runner.go:217] Completed: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0535347s)
	I0601 10:35:36.340098    7808 client.go:171] LocalClient.Create took 6.2918212s
	I0601 10:35:38.354197    7808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:35:38.360195    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:35:39.397940    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:39.397940    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0377337s)
	I0601 10:35:39.398203    7808 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:39.684417    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:35:40.702458    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:40.702458    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0180291s)
	W0601 10:35:40.702458    7808 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:35:40.702458    7808 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:40.712807    7808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:35:40.717777    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:35:41.731036    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:41.731036    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0132475s)
	I0601 10:35:41.731036    7808 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:41.941378    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:35:42.964908    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:42.964958    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0234536s)
	W0601 10:35:42.965267    7808 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:35:42.965306    7808 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:42.965306    7808 start.go:134] duration metric: createHost completed in 12.9215001s
	I0601 10:35:42.978130    7808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:35:42.985120    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:35:44.021610    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:44.021610    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0364783s)
	I0601 10:35:44.021610    7808 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:44.357752    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:35:45.365338    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:45.365459    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0074332s)
	W0601 10:35:45.365459    7808 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:35:45.365459    7808 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:45.374137    7808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:35:45.380143    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:35:46.425446    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:46.425446    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0452913s)
	I0601 10:35:46.425446    7808 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:46.781763    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
	W0601 10:35:47.786123    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
	I0601 10:35:47.786123    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0043485s)
	W0601 10:35:47.786123    7808 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:35:47.786123    7808 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	I0601 10:35:47.786123    7808 fix.go:57] fixHost completed within 45.9903711s
	I0601 10:35:47.786123    7808 start.go:81] releasing machines lock for "functional-20220601102952-9404", held for 45.9909708s
	W0601 10:35:47.786997    7808 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220601102952-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	I0601 10:35:47.810496    7808 out.go:177] 
	W0601 10:35:47.813946    7808 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
	
	W0601 10:35:47.813946    7808 out.go:239] * Suggestion: Restart Docker
	W0601 10:35:47.814464    7808 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 10:35:47.819128    7808 out.go:177] 
	
	* 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_745.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1230: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 logs failed: exit status 80
functional_test.go:1220: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
| Command |                Args                 |               Profile               |       User        |    Version     |     Start Time      |      End Time       |
|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
| delete  | --all                               | download-only-20220601102309-9404   | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:23 GMT | 01 Jun 22 10:23 GMT |
| delete  | -p                                  | download-only-20220601102309-9404   | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:23 GMT | 01 Jun 22 10:24 GMT |
|         | download-only-20220601102309-9404   |                                     |                   |                |                     |                     |
| delete  | -p                                  | download-only-20220601102309-9404   | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:24 GMT | 01 Jun 22 10:24 GMT |
|         | download-only-20220601102309-9404   |                                     |                   |                |                     |                     |
| delete  | -p                                  | download-docker-20220601102408-9404 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:24 GMT | 01 Jun 22 10:24 GMT |
|         | download-docker-20220601102408-9404 |                                     |                   |                |                     |                     |
| delete  | -p                                  | binary-mirror-20220601102453-9404   | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:25 GMT | 01 Jun 22 10:25 GMT |
|         | binary-mirror-20220601102453-9404   |                                     |                   |                |                     |                     |
| delete  | -p addons-20220601102510-9404       | addons-20220601102510-9404          | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:26 GMT | 01 Jun 22 10:26 GMT |
| delete  | -p nospam-20220601102633-9404       | nospam-20220601102633-9404          | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:29 GMT | 01 Jun 22 10:29 GMT |
| cache   | functional-20220601102952-9404      | functional-20220601102952-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
|         | cache add k8s.gcr.io/pause:3.1      |                                     |                   |                |                     |                     |
| cache   | functional-20220601102952-9404      | functional-20220601102952-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
|         | cache add k8s.gcr.io/pause:3.3      |                                     |                   |                |                     |                     |
| cache   | functional-20220601102952-9404      | functional-20220601102952-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
|         | cache add                           |                                     |                   |                |                     |                     |
|         | k8s.gcr.io/pause:latest             |                                     |                   |                |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.3         | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
| cache   | list                                | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
| cache   | functional-20220601102952-9404      | functional-20220601102952-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
|         | cache reload                        |                                     |                   |                |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.1         | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
| cache   | delete k8s.gcr.io/pause:latest      | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2022/06/01 10:33:58
Running on machine: minikube2
Binary: Built with gc go1.18.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0601 10:33:58.233859    7808 out.go:296] Setting OutFile to fd 664 ...
I0601 10:33:58.298266    7808 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0601 10:33:58.298266    7808 out.go:309] Setting ErrFile to fd 620...
I0601 10:33:58.298266    7808 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0601 10:33:58.313272    7808 out.go:303] Setting JSON to false
I0601 10:33:58.315269    7808 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11573,"bootTime":1654068065,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W0601 10:33:58.316320    7808 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0601 10:33:58.322761    7808 out.go:177] * [functional-20220601102952-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
I0601 10:33:58.325809    7808 notify.go:193] Checking for updates...
I0601 10:33:58.329825    7808 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I0601 10:33:58.332847    7808 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I0601 10:33:58.339053    7808 out.go:177]   - MINIKUBE_LOCATION=14079
I0601 10:33:58.344286    7808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0601 10:33:58.347973    7808 config.go:178] Loaded profile config "functional-20220601102952-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0601 10:33:58.348961    7808 driver.go:358] Setting default libvirt URI to qemu:///system
I0601 10:34:00.889629    7808 docker.go:137] docker version: linux-20.10.14
I0601 10:34:00.896720    7808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0601 10:34:02.975950    7808 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0792066s)
I0601 10:34:02.976895    7808 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-01 10:34:01.8998586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0601 10:34:02.981807    7808 out.go:177] * Using the docker driver based on existing profile
I0601 10:34:02.984064    7808 start.go:284] selected driver: docker
I0601 10:34:02.984064    7808 start.go:806] validating driver "docker" against &{Name:functional-20220601102952-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102952-9404 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false}
I0601 10:34:02.984154    7808 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0601 10:34:03.003705    7808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0601 10:34:05.058843    7808 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0551149s)
I0601 10:34:05.058843    7808 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-01 10:34:04.0296243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
I0601 10:34:05.106319    7808 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0601 10:34:05.106319    7808 cni.go:95] Creating CNI manager for ""
I0601 10:34:05.106319    7808 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0601 10:34:05.106319    7808 start_flags.go:306] config:
{Name:functional-20220601102952-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102952-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false}
I0601 10:34:05.112874    7808 out.go:177] * Starting control plane node functional-20220601102952-9404 in cluster functional-20220601102952-9404
I0601 10:34:05.114761    7808 cache.go:120] Beginning downloading kic base image for docker with docker
I0601 10:34:05.118876    7808 out.go:177] * Pulling base image ...
I0601 10:34:05.121427    7808 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
I0601 10:34:05.121427    7808 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
I0601 10:34:05.121427    7808 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
I0601 10:34:05.121427    7808 cache.go:57] Caching tarball of preloaded images
I0601 10:34:05.121427    7808 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0601 10:34:05.121427    7808 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
I0601 10:34:05.122364    7808 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220601102952-9404\config.json ...
I0601 10:34:06.167838    7808 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
I0601 10:34:06.167941    7808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
I0601 10:34:06.168265    7808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
I0601 10:34:06.168307    7808 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
I0601 10:34:06.168508    7808 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
I0601 10:34:06.168508    7808 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
I0601 10:34:06.168798    7808 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
I0601 10:34:06.168798    7808 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
I0601 10:34:06.168884    7808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
I0601 10:34:08.436323    7808 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
I0601 10:34:08.436393    7808 cache.go:206] Successfully downloaded all kic artifacts
I0601 10:34:08.436564    7808 start.go:352] acquiring machines lock for functional-20220601102952-9404: {Name:mkb7180899e96a2b9c65d995d84f5cf4fd14422e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0601 10:34:08.436703    7808 start.go:356] acquired machines lock for "functional-20220601102952-9404" in 138.9µs
I0601 10:34:08.436959    7808 start.go:94] Skipping create...Using existing machine configuration
I0601 10:34:08.437043    7808 fix.go:55] fixHost starting: 
I0601 10:34:08.451473    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:09.459801    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:09.459801    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0081256s)
I0601 10:34:09.460027    7808 fix.go:103] recreateIfNeeded on functional-20220601102952-9404: state= err=unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:09.460087    7808 fix.go:108] machineExists: false. err=machine does not exist
I0601 10:34:09.470253    7808 out.go:177] * docker "functional-20220601102952-9404" container is missing, will recreate.
I0601 10:34:09.473028    7808 delete.go:124] DEMOLISHING functional-20220601102952-9404 ...
I0601 10:34:09.486161    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:10.500995    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:10.501173    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0145883s)
W0601 10:34:10.501243    7808 stop.go:75] unable to get state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:10.501243    7808 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:10.515532    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:11.541677    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:11.541677    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0261337s)
I0601 10:34:11.541677    7808 delete.go:82] Unable to get host status for functional-20220601102952-9404, assuming it has already been deleted: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:11.549276    7808 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
W0601 10:34:12.554073    7808 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
I0601 10:34:12.554073    7808 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.0047857s)
I0601 10:34:12.554073    7808 kic.go:356] could not find the container functional-20220601102952-9404 to remove it. will try anyways
I0601 10:34:12.560830    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:13.603034    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:13.603034    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0421926s)
W0601 10:34:13.603034    7808 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:13.610650    7808 cli_runner.go:164] Run: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0"
W0601 10:34:14.634458    7808 cli_runner.go:211] docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0" returned with exit code 1
I0601 10:34:14.634492    7808 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": (1.0236372s)
I0601 10:34:14.634569    7808 oci.go:625] error shutdown functional-20220601102952-9404: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:15.648361    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:16.671852    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:16.671852    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0233507s)
I0601 10:34:16.671909    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:16.671962    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:16.671991    7808 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:17.238467    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:18.276363    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:18.276363    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0376512s)
I0601 10:34:18.276363    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:18.276363    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:18.276363    7808 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:19.373633    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:20.362653    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:20.362653    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:20.362653    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:20.362653    7808 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:21.693757    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:22.716124    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:22.716344    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0223555s)
I0601 10:34:22.716419    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:22.716446    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:22.716446    7808 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:24.317633    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:25.337128    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:25.337163    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0193017s)
I0601 10:34:25.337322    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:25.337322    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:25.337397    7808 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:27.686870    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:28.697307    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:28.697307    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0104261s)
I0601 10:34:28.697307    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:28.697307    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:28.697307    7808 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:33.217787    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:34.208806    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:34.208806    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:34.208806    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:34.208806    7808 oci.go:88] couldn't shut down functional-20220601102952-9404 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:34.216041    7808 cli_runner.go:164] Run: docker rm -f -v functional-20220601102952-9404
I0601 10:34:35.234076    7808 cli_runner.go:217] Completed: docker rm -f -v functional-20220601102952-9404: (1.018023s)
I0601 10:34:35.242025    7808 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
W0601 10:34:36.281874    7808 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
I0601 10:34:36.281874    7808 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.0397065s)
I0601 10:34:36.289630    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0601 10:34:37.343455    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0601 10:34:37.343455    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0536643s)
I0601 10:34:37.351699    7808 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
I0601 10:34:37.351699    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
W0601 10:34:38.367605    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
I0601 10:34:38.367605    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0157441s)
I0601 10:34:38.367605    7808 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220601102952-9404
I0601 10:34:38.367605    7808 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220601102952-9404

** /stderr **
W0601 10:34:38.368275    7808 delete.go:139] delete failed (probably ok) <nil>
I0601 10:34:38.368275    7808 fix.go:115] Sleeping 1 second for extra luck!
I0601 10:34:39.369250    7808 start.go:131] createHost starting for "" (driver="docker")
I0601 10:34:39.373571    7808 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0601 10:34:39.374011    7808 start.go:165] libmachine.API.Create for "functional-20220601102952-9404" (driver="docker")
I0601 10:34:39.374011    7808 client.go:168] LocalClient.Create starting
I0601 10:34:39.374581    7808 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0601 10:34:39.374944    7808 main.go:134] libmachine: Decoding PEM data...
I0601 10:34:39.374944    7808 main.go:134] libmachine: Parsing certificate...
I0601 10:34:39.375265    7808 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0601 10:34:39.375378    7808 main.go:134] libmachine: Decoding PEM data...
I0601 10:34:39.375493    7808 main.go:134] libmachine: Parsing certificate...
I0601 10:34:39.383991    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0601 10:34:40.397320    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0601 10:34:40.397505    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0133179s)
I0601 10:34:40.405503    7808 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
I0601 10:34:40.405503    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
W0601 10:34:41.407359    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
I0601 10:34:41.407385    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0017987s)
I0601 10:34:41.407385    7808 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220601102952-9404
I0601 10:34:41.407427    7808 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220601102952-9404

** /stderr **
I0601 10:34:41.414960    7808 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0601 10:34:42.440609    7808 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.025638s)
I0601 10:34:42.458460    7808 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006a20] misses:0}
I0601 10:34:42.458460    7808 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0601 10:34:42.458460    7808 network_create.go:115] attempt to create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0601 10:34:42.466241    7808 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404
W0601 10:34:43.514717    7808 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404 returned with exit code 1
I0601 10:34:43.514717    7808 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: (1.0482283s)
E0601 10:34:43.514877    7808 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.49.0/24: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 6258869651f85369ce8c18f7218c1ef0b2d0a274032a03de802efa8036fb59a6 (br-6258869651f8): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
W0601 10:34:43.515077    7808 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 6258869651f85369ce8c18f7218c1ef0b2d0a274032a03de802efa8036fb59a6 (br-6258869651f8): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4

I0601 10:34:43.528389    7808 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0601 10:34:44.559674    7808 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0311157s)
I0601 10:34:44.567012    7808 cli_runner.go:164] Run: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true
W0601 10:34:45.626524    7808 cli_runner.go:211] docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0601 10:34:45.626773    7808 cli_runner.go:217] Completed: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0594997s)
I0601 10:34:45.626911    7808 client.go:171] LocalClient.Create took 6.25283s
I0601 10:34:47.640568    7808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0601 10:34:47.646562    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:48.653290    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:48.653290    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0064272s)
I0601 10:34:48.653442    7808 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:48.834436    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:49.863840    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:49.863840    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0291974s)
W0601 10:34:49.863878    7808 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404

W0601 10:34:49.863878    7808 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:49.873617    7808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0601 10:34:49.879584    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:50.894916    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:50.894916    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0153213s)
I0601 10:34:50.894916    7808 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:51.106868    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:52.105249    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
W0601 10:34:52.105249    7808 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404

W0601 10:34:52.105249    7808 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:52.105249    7808 start.go:134] duration metric: createHost completed in 12.7358571s
I0601 10:34:52.114256    7808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0601 10:34:52.119242    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:53.135480    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:53.135506    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0161596s)
I0601 10:34:53.135506    7808 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:53.471929    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:54.491841    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:54.491978    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0199001s)
W0601 10:34:54.492269    7808 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404

W0601 10:34:54.492269    7808 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:54.502421    7808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0601 10:34:54.508470    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:55.532346    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:55.532503    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0237274s)
I0601 10:34:55.532642    7808 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:55.773648    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:56.790698    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:56.790698    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.017038s)
W0601 10:34:56.790698    7808 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404

W0601 10:34:56.790698    7808 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:56.790698    7808 fix.go:57] fixHost completed within 48.3531991s
I0601 10:34:56.790698    7808 start.go:81] releasing machines lock for "functional-20220601102952-9404", held for 48.3533716s
W0601 10:34:56.791236    7808 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
W0601 10:34:56.791650    7808 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system

I0601 10:34:56.791650    7808 start.go:614] Will try again in 5 seconds ...
I0601 10:35:01.794641    7808 start.go:352] acquiring machines lock for functional-20220601102952-9404: {Name:mkb7180899e96a2b9c65d995d84f5cf4fd14422e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0601 10:35:01.794641    7808 start.go:356] acquired machines lock for "functional-20220601102952-9404" in 0s
I0601 10:35:01.795241    7808 start.go:94] Skipping create...Using existing machine configuration
I0601 10:35:01.795241    7808 fix.go:55] fixHost starting: 
I0601 10:35:01.808965    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:02.855630    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:02.855675    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.046522s)
I0601 10:35:02.855830    7808 fix.go:103] recreateIfNeeded on functional-20220601102952-9404: state= err=unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:02.855852    7808 fix.go:108] machineExists: false. err=machine does not exist
I0601 10:35:02.859654    7808 out.go:177] * docker "functional-20220601102952-9404" container is missing, will recreate.
I0601 10:35:02.873703    7808 delete.go:124] DEMOLISHING functional-20220601102952-9404 ...
I0601 10:35:02.886864    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:03.900015    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:03.900015    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0131393s)
W0601 10:35:03.900015    7808 stop.go:75] unable to get state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:03.900015    7808 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:03.913414    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:04.985562    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:04.985562    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0719939s)
I0601 10:35:04.985759    7808 delete.go:82] Unable to get host status for functional-20220601102952-9404, assuming it has already been deleted: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:04.993499    7808 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
W0601 10:35:06.017445    7808 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
I0601 10:35:06.017445    7808 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.0239353s)
I0601 10:35:06.017445    7808 kic.go:356] could not find the container functional-20220601102952-9404 to remove it. will try anyways
I0601 10:35:06.025004    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:07.032496    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:07.032496    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0074808s)
W0601 10:35:07.032496    7808 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:07.039836    7808 cli_runner.go:164] Run: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0"
W0601 10:35:08.054132    7808 cli_runner.go:211] docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0" returned with exit code 1
I0601 10:35:08.054132    7808 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": (1.0142843s)
I0601 10:35:08.054132    7808 oci.go:625] error shutdown functional-20220601102952-9404: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:09.077289    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:10.092714    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:10.092748    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0151875s)
I0601 10:35:10.092748    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:10.092748    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:10.092748    7808 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:10.586899    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:11.615443    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:11.615443    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0285323s)
I0601 10:35:11.615443    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:11.615443    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:11.615443    7808 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:12.225197    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:13.249992    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:13.250108    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0246992s)
I0601 10:35:13.250180    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:13.250180    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:13.250248    7808 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:14.152944    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:15.200613    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:15.200810    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0476571s)
I0601 10:35:15.200885    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:15.200885    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:15.200885    7808 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:17.211889    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:18.250481    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:18.250481    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.038581s)
I0601 10:35:18.250481    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:18.250481    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:18.250481    7808 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:20.080903    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:21.088683    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:21.088683    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0077695s)
I0601 10:35:21.088900    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:21.088900    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:21.088929    7808 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:23.779492    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:24.852209    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:24.852209    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0727047s)
I0601 10:35:24.852328    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:24.852328    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:24.852422    7808 oci.go:88] couldn't shut down functional-20220601102952-9404 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404

I0601 10:35:24.859843    7808 cli_runner.go:164] Run: docker rm -f -v functional-20220601102952-9404
I0601 10:35:25.885589    7808 cli_runner.go:217] Completed: docker rm -f -v functional-20220601102952-9404: (1.0256621s)
I0601 10:35:25.892104    7808 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
W0601 10:35:26.929997    7808 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
I0601 10:35:26.929997    7808 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.037881s)
I0601 10:35:26.937919    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0601 10:35:27.992129    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0601 10:35:27.992129    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0540189s)
I0601 10:35:27.999625    7808 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
I0601 10:35:27.999625    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
W0601 10:35:29.031726    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
I0601 10:35:29.031726    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0320891s)
I0601 10:35:29.031726    7808 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220601102952-9404
I0601 10:35:29.031726    7808 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220601102952-9404

** /stderr **
W0601 10:35:29.031726    7808 delete.go:139] delete failed (probably ok) <nil>
I0601 10:35:29.031726    7808 fix.go:115] Sleeping 1 second for extra luck!
I0601 10:35:30.043662    7808 start.go:131] createHost starting for "" (driver="docker")
I0601 10:35:30.047377    7808 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0601 10:35:30.048207    7808 start.go:165] libmachine.API.Create for "functional-20220601102952-9404" (driver="docker")
I0601 10:35:30.048207    7808 client.go:168] LocalClient.Create starting
I0601 10:35:30.049295    7808 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0601 10:35:30.049295    7808 main.go:134] libmachine: Decoding PEM data...
I0601 10:35:30.049295    7808 main.go:134] libmachine: Parsing certificate...
I0601 10:35:30.049824    7808 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0601 10:35:30.049935    7808 main.go:134] libmachine: Decoding PEM data...
I0601 10:35:30.050027    7808 main.go:134] libmachine: Parsing certificate...
I0601 10:35:30.058751    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0601 10:35:31.060454    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0601 10:35:31.060454    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0015144s)
I0601 10:35:31.069123    7808 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
I0601 10:35:31.069123    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
W0601 10:35:32.124043    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
I0601 10:35:32.124043    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0549091s)
I0601 10:35:32.124043    7808 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220601102952-9404
I0601 10:35:32.124043    7808 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220601102952-9404

** /stderr **
I0601 10:35:32.132197    7808 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0601 10:35:33.168524    7808 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.036316s)
I0601 10:35:33.185998    7808 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006a20] amended:false}} dirty:map[] misses:0}
I0601 10:35:33.185998    7808 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0601 10:35:33.201005    7808 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006a20] amended:true}} dirty:map[192.168.49.0:0xc000006a20 192.168.58.0:0xc0008a8900] misses:0}
I0601 10:35:33.201005    7808 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0601 10:35:33.201005    7808 network_create.go:115] attempt to create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0601 10:35:33.207804    7808 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404
W0601 10:35:34.245342    7808 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404 returned with exit code 1
I0601 10:35:34.245475    7808 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: (1.0375266s)
E0601 10:35:34.245475    7808 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.58.0/24: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network ca6b4559829ad63693afe8d6cf750b3a62b8b574f6557884a5f42b321bfb9e73 (br-ca6b4559829a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
W0601 10:35:34.245825    7808 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network ca6b4559829ad63693afe8d6cf750b3a62b8b574f6557884a5f42b321bfb9e73 (br-ca6b4559829a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4

I0601 10:35:34.259541    7808 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0601 10:35:35.279439    7808 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0198869s)
I0601 10:35:35.286316    7808 cli_runner.go:164] Run: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true
W0601 10:35:36.339862    7808 cli_runner.go:211] docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0601 10:35:36.340098    7808 cli_runner.go:217] Completed: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0535347s)
I0601 10:35:36.340098    7808 client.go:171] LocalClient.Create took 6.2918212s
I0601 10:35:38.354197    7808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0601 10:35:38.360195    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:39.397940    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:39.397940    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0377337s)
I0601 10:35:39.398203    7808 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:39.684417    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:40.702458    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:40.702458    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0180291s)
W0601 10:35:40.702458    7808 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404

W0601 10:35:40.702458    7808 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:40.712807    7808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0601 10:35:40.717777    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:41.731036    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:41.731036    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0132475s)
I0601 10:35:41.731036    7808 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:41.941378    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:42.964908    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:42.964958    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0234536s)
W0601 10:35:42.965267    7808 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
W0601 10:35:42.965306    7808 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:42.965306    7808 start.go:134] duration metric: createHost completed in 12.9215001s
I0601 10:35:42.978130    7808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0601 10:35:42.985120    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:44.021610    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:44.021610    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0364783s)
I0601 10:35:44.021610    7808 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:44.357752    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:45.365338    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:45.365459    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0074332s)
W0601 10:35:45.365459    7808 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
W0601 10:35:45.365459    7808 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:45.374137    7808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0601 10:35:45.380143    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:46.425446    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:46.425446    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0452913s)
I0601 10:35:46.425446    7808 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:46.781763    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:47.786123    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:47.786123    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0043485s)
W0601 10:35:47.786123    7808 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
W0601 10:35:47.786123    7808 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
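The `ssh_runner` lines above probe disk space inside the node with `df -BG /var | awk 'NR==2{print $4}'`. A sketch of the equivalent field extraction; minikube runs the awk pipeline on the remote shell, so doing it in Go here is purely illustrative, and the sample `df` row is invented:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// availGiB pulls the fourth whitespace-separated field ("Available") from
// the second line of `df -BG /var` output, mirroring awk 'NR==2{print $4}'.
func availGiB(dfOut string) (int, error) {
	lines := strings.Split(strings.TrimSpace(dfOut), "\n")
	if len(lines) < 2 {
		return 0, fmt.Errorf("unexpected df output: %q", dfOut)
	}
	fields := strings.Fields(lines[1])
	if len(fields) < 4 {
		return 0, fmt.Errorf("unexpected df row: %q", lines[1])
	}
	// -BG prints sizes with a trailing "G" suffix.
	return strconv.Atoi(strings.TrimSuffix(fields[3], "G"))
}

func main() {
	// Hypothetical output; the real values depend on the node's disk.
	sample := "Filesystem     1G-blocks  Used Available Use% Mounted on\n" +
		"overlay             251G   20G      219G   9% /var\n"
	g, err := availGiB(sample)
	fmt.Println(g, err)
}
```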
I0601 10:35:47.786123    7808 fix.go:57] fixHost completed within 45.9903711s
I0601 10:35:47.786123    7808 start.go:81] releasing machines lock for "functional-20220601102952-9404", held for 45.9909708s
W0601 10:35:47.786997    7808 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220601102952-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
I0601 10:35:47.810496    7808 out.go:177] 
W0601 10:35:47.813946    7808 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
W0601 10:35:47.813946    7808 out.go:239] * Suggestion: Restart Docker
W0601 10:35:47.814464    7808 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
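The exit above shows minikube recognizing the daemon's "read-only file system" stderr and mapping it to the reason code PR_DOCKER_READONLY_VOL with a "Restart Docker" suggestion and a known-issue link. A loose sketch of that kind of error classification by substring matching; the function name, signature, and fallback ID are illustrative, not minikube's real `reason` package API:

```go
package main

import (
	"fmt"
	"strings"
)

// classify maps known docker stderr fragments to a reason ID and a
// suggestion, modeled on the mappings visible in this log:
// "read-only file system" -> PR_DOCKER_READONLY_VOL ("Restart Docker"),
// "No such container"     -> GUEST_STATUS (container state unknown).
func classify(stderr string) (id, suggestion string) {
	switch {
	case strings.Contains(stderr, "read-only file system"):
		return "PR_DOCKER_READONLY_VOL", "Restart Docker"
	case strings.Contains(stderr, "No such container"):
		return "GUEST_STATUS", "Run \"minikube delete\" and start again"
	default:
		return "UNKNOWN", "Check 'minikube logs'"
	}
}

func main() {
	id, s := classify("Error response from daemon: mkdir /var/lib/docker/volumes/x: read-only file system")
	fmt.Println(id, s)
}
```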
I0601 10:35:47.819128    7808 out.go:177] 

* 

***
--- FAIL: TestFunctional/serial/LogsCmd (3.55s)

TestFunctional/serial/LogsFileCmd (4.47s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3886908138\001\logs.txt
functional_test.go:1242: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3886908138\001\logs.txt: exit status 80 (4.2430037s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_746.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1244: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3886908138\001\logs.txt failed: exit status 80
functional_test.go:1247: expected empty minikube logs output, but got: 
***
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_746.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr *****
functional_test.go:1220: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
| Command |                Args                 |               Profile               |       User        |    Version     |     Start Time      |      End Time       |
|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|
| delete  | --all                               | download-only-20220601102309-9404   | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:23 GMT | 01 Jun 22 10:23 GMT |
| delete  | -p                                  | download-only-20220601102309-9404   | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:23 GMT | 01 Jun 22 10:24 GMT |
|         | download-only-20220601102309-9404   |                                     |                   |                |                     |                     |
| delete  | -p                                  | download-only-20220601102309-9404   | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:24 GMT | 01 Jun 22 10:24 GMT |
|         | download-only-20220601102309-9404   |                                     |                   |                |                     |                     |
| delete  | -p                                  | download-docker-20220601102408-9404 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:24 GMT | 01 Jun 22 10:24 GMT |
|         | download-docker-20220601102408-9404 |                                     |                   |                |                     |                     |
| delete  | -p                                  | binary-mirror-20220601102453-9404   | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:25 GMT | 01 Jun 22 10:25 GMT |
|         | binary-mirror-20220601102453-9404   |                                     |                   |                |                     |                     |
| delete  | -p addons-20220601102510-9404       | addons-20220601102510-9404          | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:26 GMT | 01 Jun 22 10:26 GMT |
| delete  | -p nospam-20220601102633-9404       | nospam-20220601102633-9404          | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:29 GMT | 01 Jun 22 10:29 GMT |
| cache   | functional-20220601102952-9404      | functional-20220601102952-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
|         | cache add k8s.gcr.io/pause:3.1      |                                     |                   |                |                     |                     |
| cache   | functional-20220601102952-9404      | functional-20220601102952-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
|         | cache add k8s.gcr.io/pause:3.3      |                                     |                   |                |                     |                     |
| cache   | functional-20220601102952-9404      | functional-20220601102952-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
|         | cache add                           |                                     |                   |                |                     |                     |
|         | k8s.gcr.io/pause:latest             |                                     |                   |                |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.3         | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
| cache   | list                                | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
| cache   | functional-20220601102952-9404      | functional-20220601102952-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
|         | cache reload                        |                                     |                   |                |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.1         | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
| cache   | delete k8s.gcr.io/pause:latest      | minikube                            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
|---------|-------------------------------------|-------------------------------------|-------------------|----------------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2022/06/01 10:33:58
Running on machine: minikube2
Binary: Built with gc go1.18.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0601 10:33:58.233859    7808 out.go:296] Setting OutFile to fd 664 ...
I0601 10:33:58.298266    7808 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0601 10:33:58.298266    7808 out.go:309] Setting ErrFile to fd 620...
I0601 10:33:58.298266    7808 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0601 10:33:58.313272    7808 out.go:303] Setting JSON to false
I0601 10:33:58.315269    7808 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11573,"bootTime":1654068065,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W0601 10:33:58.316320    7808 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0601 10:33:58.322761    7808 out.go:177] * [functional-20220601102952-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
I0601 10:33:58.325809    7808 notify.go:193] Checking for updates...
I0601 10:33:58.329825    7808 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I0601 10:33:58.332847    7808 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I0601 10:33:58.339053    7808 out.go:177]   - MINIKUBE_LOCATION=14079
I0601 10:33:58.344286    7808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0601 10:33:58.347973    7808 config.go:178] Loaded profile config "functional-20220601102952-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0601 10:33:58.348961    7808 driver.go:358] Setting default libvirt URI to qemu:///system
I0601 10:34:00.889629    7808 docker.go:137] docker version: linux-20.10.14
I0601 10:34:00.896720    7808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0601 10:34:02.975950    7808 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0792066s)
I0601 10:34:02.976895    7808 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-01 10:34:01.8998586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0601 10:34:02.981807    7808 out.go:177] * Using the docker driver based on existing profile
I0601 10:34:02.984064    7808 start.go:284] selected driver: docker
I0601 10:34:02.984064    7808 start.go:806] validating driver "docker" against &{Name:functional-20220601102952-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102952-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0601 10:34:02.984154    7808 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0601 10:34:03.003705    7808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0601 10:34:05.058843    7808 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0551149s)
I0601 10:34:05.058843    7808 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-01 10:34:04.0296243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0601 10:34:05.106319    7808 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0601 10:34:05.106319    7808 cni.go:95] Creating CNI manager for ""
I0601 10:34:05.106319    7808 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0601 10:34:05.106319    7808 start_flags.go:306] config:
{Name:functional-20220601102952-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102952-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0601 10:34:05.112874    7808 out.go:177] * Starting control plane node functional-20220601102952-9404 in cluster functional-20220601102952-9404
I0601 10:34:05.114761    7808 cache.go:120] Beginning downloading kic base image for docker with docker
I0601 10:34:05.118876    7808 out.go:177] * Pulling base image ...
I0601 10:34:05.121427    7808 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
I0601 10:34:05.121427    7808 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
I0601 10:34:05.121427    7808 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
I0601 10:34:05.121427    7808 cache.go:57] Caching tarball of preloaded images
I0601 10:34:05.121427    7808 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0601 10:34:05.121427    7808 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
I0601 10:34:05.122364    7808 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220601102952-9404\config.json ...
I0601 10:34:06.167838    7808 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
I0601 10:34:06.167941    7808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
I0601 10:34:06.168265    7808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
I0601 10:34:06.168307    7808 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
I0601 10:34:06.168508    7808 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
I0601 10:34:06.168508    7808 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
I0601 10:34:06.168798    7808 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
I0601 10:34:06.168798    7808 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
I0601 10:34:06.168884    7808 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
I0601 10:34:08.436323    7808 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
I0601 10:34:08.436393    7808 cache.go:206] Successfully downloaded all kic artifacts
I0601 10:34:08.436564    7808 start.go:352] acquiring machines lock for functional-20220601102952-9404: {Name:mkb7180899e96a2b9c65d995d84f5cf4fd14422e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0601 10:34:08.436703    7808 start.go:356] acquired machines lock for "functional-20220601102952-9404" in 138.9µs
I0601 10:34:08.436959    7808 start.go:94] Skipping create...Using existing machine configuration
I0601 10:34:08.437043    7808 fix.go:55] fixHost starting: 
I0601 10:34:08.451473    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:09.459801    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:09.459801    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0081256s)
I0601 10:34:09.460027    7808 fix.go:103] recreateIfNeeded on functional-20220601102952-9404: state= err=unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:09.460087    7808 fix.go:108] machineExists: false. err=machine does not exist
I0601 10:34:09.470253    7808 out.go:177] * docker "functional-20220601102952-9404" container is missing, will recreate.
I0601 10:34:09.473028    7808 delete.go:124] DEMOLISHING functional-20220601102952-9404 ...
I0601 10:34:09.486161    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:10.500995    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:10.501173    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0145883s)
W0601 10:34:10.501243    7808 stop.go:75] unable to get state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:10.501243    7808 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:10.515532    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:11.541677    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:11.541677    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0261337s)
I0601 10:34:11.541677    7808 delete.go:82] Unable to get host status for functional-20220601102952-9404, assuming it has already been deleted: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:11.549276    7808 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
W0601 10:34:12.554073    7808 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
I0601 10:34:12.554073    7808 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.0047857s)
I0601 10:34:12.554073    7808 kic.go:356] could not find the container functional-20220601102952-9404 to remove it. will try anyways
I0601 10:34:12.560830    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:13.603034    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:13.603034    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0421926s)
W0601 10:34:13.603034    7808 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:13.610650    7808 cli_runner.go:164] Run: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0"
W0601 10:34:14.634458    7808 cli_runner.go:211] docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0" returned with exit code 1
I0601 10:34:14.634492    7808 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": (1.0236372s)
I0601 10:34:14.634569    7808 oci.go:625] error shutdown functional-20220601102952-9404: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:15.648361    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:16.671852    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:16.671852    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0233507s)
I0601 10:34:16.671909    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:16.671962    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:16.671991    7808 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:17.238467    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:18.276363    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:18.276363    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0376512s)
I0601 10:34:18.276363    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:18.276363    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:18.276363    7808 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:19.373633    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:20.362653    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:20.362653    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:20.362653    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:20.362653    7808 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:21.693757    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:22.716124    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:22.716344    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0223555s)
I0601 10:34:22.716419    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:22.716446    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:22.716446    7808 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:24.317633    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:25.337128    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:25.337163    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0193017s)
I0601 10:34:25.337322    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:25.337322    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:25.337397    7808 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:27.686870    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:28.697307    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:28.697307    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0104261s)
I0601 10:34:28.697307    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:28.697307    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:28.697307    7808 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:33.217787    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:34:34.208806    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:34:34.208806    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:34.208806    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:34:34.208806    7808 oci.go:88] couldn't shut down functional-20220601102952-9404 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:34.216041    7808 cli_runner.go:164] Run: docker rm -f -v functional-20220601102952-9404
I0601 10:34:35.234076    7808 cli_runner.go:217] Completed: docker rm -f -v functional-20220601102952-9404: (1.018023s)
I0601 10:34:35.242025    7808 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
W0601 10:34:36.281874    7808 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
I0601 10:34:36.281874    7808 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.0397065s)
I0601 10:34:36.289630    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0601 10:34:37.343455    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0601 10:34:37.343455    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0536643s)
I0601 10:34:37.351699    7808 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
I0601 10:34:37.351699    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
W0601 10:34:38.367605    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
I0601 10:34:38.367605    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0157441s)
I0601 10:34:38.367605    7808 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220601102952-9404
I0601 10:34:38.367605    7808 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220601102952-9404

** /stderr **
W0601 10:34:38.368275    7808 delete.go:139] delete failed (probably ok) <nil>
I0601 10:34:38.368275    7808 fix.go:115] Sleeping 1 second for extra luck!
I0601 10:34:39.369250    7808 start.go:131] createHost starting for "" (driver="docker")
I0601 10:34:39.373571    7808 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0601 10:34:39.374011    7808 start.go:165] libmachine.API.Create for "functional-20220601102952-9404" (driver="docker")
I0601 10:34:39.374011    7808 client.go:168] LocalClient.Create starting
I0601 10:34:39.374581    7808 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0601 10:34:39.374944    7808 main.go:134] libmachine: Decoding PEM data...
I0601 10:34:39.374944    7808 main.go:134] libmachine: Parsing certificate...
I0601 10:34:39.375265    7808 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0601 10:34:39.375378    7808 main.go:134] libmachine: Decoding PEM data...
I0601 10:34:39.375493    7808 main.go:134] libmachine: Parsing certificate...
I0601 10:34:39.383991    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0601 10:34:40.397320    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0601 10:34:40.397505    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0133179s)
I0601 10:34:40.405503    7808 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
I0601 10:34:40.405503    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
W0601 10:34:41.407359    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
I0601 10:34:41.407385    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0017987s)
I0601 10:34:41.407385    7808 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220601102952-9404
I0601 10:34:41.407427    7808 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220601102952-9404

** /stderr **
I0601 10:34:41.414960    7808 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0601 10:34:42.440609    7808 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.025638s)
I0601 10:34:42.458460    7808 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006a20] misses:0}
I0601 10:34:42.458460    7808 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0601 10:34:42.458460    7808 network_create.go:115] attempt to create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0601 10:34:42.466241    7808 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404
W0601 10:34:43.514717    7808 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404 returned with exit code 1
I0601 10:34:43.514717    7808 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: (1.0482283s)
E0601 10:34:43.514877    7808 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.49.0/24: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 6258869651f85369ce8c18f7218c1ef0b2d0a274032a03de802efa8036fb59a6 (br-6258869651f8): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
W0601 10:34:43.515077    7808 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 6258869651f85369ce8c18f7218c1ef0b2d0a274032a03de802efa8036fb59a6 (br-6258869651f8): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4

I0601 10:34:43.528389    7808 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0601 10:34:44.559674    7808 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0311157s)
I0601 10:34:44.567012    7808 cli_runner.go:164] Run: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true
W0601 10:34:45.626524    7808 cli_runner.go:211] docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0601 10:34:45.626773    7808 cli_runner.go:217] Completed: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0594997s)
I0601 10:34:45.626911    7808 client.go:171] LocalClient.Create took 6.25283s
I0601 10:34:47.640568    7808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0601 10:34:47.646562    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:48.653290    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:48.653290    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0064272s)
I0601 10:34:48.653442    7808 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:48.834436    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:49.863840    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:49.863840    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0291974s)
W0601 10:34:49.863878    7808 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404

W0601 10:34:49.863878    7808 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:49.873617    7808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0601 10:34:49.879584    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:50.894916    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:50.894916    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0153213s)
I0601 10:34:50.894916    7808 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:51.106868    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:52.105249    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
W0601 10:34:52.105249    7808 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404

W0601 10:34:52.105249    7808 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:52.105249    7808 start.go:134] duration metric: createHost completed in 12.7358571s
I0601 10:34:52.114256    7808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0601 10:34:52.119242    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:53.135480    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:53.135506    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0161596s)
I0601 10:34:53.135506    7808 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:53.471929    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:54.491841    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:54.491978    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0199001s)
W0601 10:34:54.492269    7808 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404

W0601 10:34:54.492269    7808 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:54.502421    7808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0601 10:34:54.508470    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:55.532346    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:55.532503    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0237274s)
I0601 10:34:55.532642    7808 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:55.773648    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:34:56.790698    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:34:56.790698    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.017038s)
W0601 10:34:56.790698    7808 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404

W0601 10:34:56.790698    7808 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:34:56.790698    7808 fix.go:57] fixHost completed within 48.3531991s
I0601 10:34:56.790698    7808 start.go:81] releasing machines lock for "functional-20220601102952-9404", held for 48.3533716s
W0601 10:34:56.791236    7808 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system
W0601 10:34:56.791650    7808 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system

I0601 10:34:56.791650    7808 start.go:614] Will try again in 5 seconds ...
I0601 10:35:01.794641    7808 start.go:352] acquiring machines lock for functional-20220601102952-9404: {Name:mkb7180899e96a2b9c65d995d84f5cf4fd14422e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0601 10:35:01.794641    7808 start.go:356] acquired machines lock for "functional-20220601102952-9404" in 0s
I0601 10:35:01.795241    7808 start.go:94] Skipping create...Using existing machine configuration
I0601 10:35:01.795241    7808 fix.go:55] fixHost starting: 
I0601 10:35:01.808965    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:02.855630    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:02.855675    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.046522s)
I0601 10:35:02.855830    7808 fix.go:103] recreateIfNeeded on functional-20220601102952-9404: state= err=unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:02.855852    7808 fix.go:108] machineExists: false. err=machine does not exist
I0601 10:35:02.859654    7808 out.go:177] * docker "functional-20220601102952-9404" container is missing, will recreate.
I0601 10:35:02.873703    7808 delete.go:124] DEMOLISHING functional-20220601102952-9404 ...
I0601 10:35:02.886864    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:03.900015    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:03.900015    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0131393s)
W0601 10:35:03.900015    7808 stop.go:75] unable to get state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:03.900015    7808 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:03.913414    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:04.985562    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:04.985562    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0719939s)
I0601 10:35:04.985759    7808 delete.go:82] Unable to get host status for functional-20220601102952-9404, assuming it has already been deleted: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:04.993499    7808 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
W0601 10:35:06.017445    7808 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
I0601 10:35:06.017445    7808 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.0239353s)
I0601 10:35:06.017445    7808 kic.go:356] could not find the container functional-20220601102952-9404 to remove it. will try anyways
I0601 10:35:06.025004    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:07.032496    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:07.032496    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0074808s)
W0601 10:35:07.032496    7808 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:07.039836    7808 cli_runner.go:164] Run: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0"
W0601 10:35:08.054132    7808 cli_runner.go:211] docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0" returned with exit code 1
I0601 10:35:08.054132    7808 cli_runner.go:217] Completed: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": (1.0142843s)
I0601 10:35:08.054132    7808 oci.go:625] error shutdown functional-20220601102952-9404: docker exec --privileged -t functional-20220601102952-9404 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:09.077289    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:10.092714    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:10.092748    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0151875s)
I0601 10:35:10.092748    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:10.092748    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:10.092748    7808 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:10.586899    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:11.615443    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:11.615443    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0285323s)
I0601 10:35:11.615443    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:11.615443    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:11.615443    7808 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:12.225197    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:13.249992    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:13.250108    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0246992s)
I0601 10:35:13.250180    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:13.250180    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:13.250248    7808 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:14.152944    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:15.200613    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:15.200810    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0476571s)
I0601 10:35:15.200885    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:15.200885    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:15.200885    7808 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:17.211889    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:18.250481    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:18.250481    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.038581s)
I0601 10:35:18.250481    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:18.250481    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:18.250481    7808 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:20.080903    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:21.088683    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:21.088683    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0077695s)
I0601 10:35:21.088900    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:21.088900    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:21.088929    7808 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:23.779492    7808 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
W0601 10:35:24.852209    7808 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
I0601 10:35:24.852209    7808 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (1.0727047s)
I0601 10:35:24.852328    7808 oci.go:637] temporary error verifying shutdown: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:24.852328    7808 oci.go:639] temporary error: container functional-20220601102952-9404 status is  but expect it to be exited
I0601 10:35:24.852422    7808 oci.go:88] couldn't shut down functional-20220601102952-9404 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:24.859843    7808 cli_runner.go:164] Run: docker rm -f -v functional-20220601102952-9404
I0601 10:35:25.885589    7808 cli_runner.go:217] Completed: docker rm -f -v functional-20220601102952-9404: (1.0256621s)
I0601 10:35:25.892104    7808 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220601102952-9404
W0601 10:35:26.929997    7808 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220601102952-9404 returned with exit code 1
I0601 10:35:26.929997    7808 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} functional-20220601102952-9404: (1.037881s)
I0601 10:35:26.937919    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0601 10:35:27.992129    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0601 10:35:27.992129    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0540189s)
I0601 10:35:27.999625    7808 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
I0601 10:35:27.999625    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
W0601 10:35:29.031726    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
I0601 10:35:29.031726    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0320891s)
I0601 10:35:29.031726    7808 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220601102952-9404
I0601 10:35:29.031726    7808 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220601102952-9404

** /stderr **
W0601 10:35:29.031726    7808 delete.go:139] delete failed (probably ok) <nil>
I0601 10:35:29.031726    7808 fix.go:115] Sleeping 1 second for extra luck!
I0601 10:35:30.043662    7808 start.go:131] createHost starting for "" (driver="docker")
I0601 10:35:30.047377    7808 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0601 10:35:30.048207    7808 start.go:165] libmachine.API.Create for "functional-20220601102952-9404" (driver="docker")
I0601 10:35:30.048207    7808 client.go:168] LocalClient.Create starting
I0601 10:35:30.049295    7808 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0601 10:35:30.049295    7808 main.go:134] libmachine: Decoding PEM data...
I0601 10:35:30.049295    7808 main.go:134] libmachine: Parsing certificate...
I0601 10:35:30.049824    7808 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0601 10:35:30.049935    7808 main.go:134] libmachine: Decoding PEM data...
I0601 10:35:30.050027    7808 main.go:134] libmachine: Parsing certificate...
I0601 10:35:30.058751    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0601 10:35:31.060454    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0601 10:35:31.060454    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0015144s)
I0601 10:35:31.069123    7808 network_create.go:272] running [docker network inspect functional-20220601102952-9404] to gather additional debugging logs...
I0601 10:35:31.069123    7808 cli_runner.go:164] Run: docker network inspect functional-20220601102952-9404
W0601 10:35:32.124043    7808 cli_runner.go:211] docker network inspect functional-20220601102952-9404 returned with exit code 1
I0601 10:35:32.124043    7808 cli_runner.go:217] Completed: docker network inspect functional-20220601102952-9404: (1.0549091s)
I0601 10:35:32.124043    7808 network_create.go:275] error running [docker network inspect functional-20220601102952-9404]: docker network inspect functional-20220601102952-9404: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220601102952-9404
I0601 10:35:32.124043    7808 network_create.go:277] output of [docker network inspect functional-20220601102952-9404]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220601102952-9404

** /stderr **
I0601 10:35:32.132197    7808 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0601 10:35:33.168524    7808 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.036316s)
I0601 10:35:33.185998    7808 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006a20] amended:false}} dirty:map[] misses:0}
I0601 10:35:33.185998    7808 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0601 10:35:33.201005    7808 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006a20] amended:true}} dirty:map[192.168.49.0:0xc000006a20 192.168.58.0:0xc0008a8900] misses:0}
I0601 10:35:33.201005    7808 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0601 10:35:33.201005    7808 network_create.go:115] attempt to create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0601 10:35:33.207804    7808 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404
W0601 10:35:34.245342    7808 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404 returned with exit code 1
I0601 10:35:34.245475    7808 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: (1.0375266s)
E0601 10:35:34.245475    7808 network_create.go:104] error while trying to create docker network functional-20220601102952-9404 192.168.58.0/24: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network ca6b4559829ad63693afe8d6cf750b3a62b8b574f6557884a5f42b321bfb9e73 (br-ca6b4559829a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
W0601 10:35:34.245825    7808 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220601102952-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20220601102952-9404: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network ca6b4559829ad63693afe8d6cf750b3a62b8b574f6557884a5f42b321bfb9e73 (br-ca6b4559829a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
I0601 10:35:34.259541    7808 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0601 10:35:35.279439    7808 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0198869s)
I0601 10:35:35.286316    7808 cli_runner.go:164] Run: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true
W0601 10:35:36.339862    7808 cli_runner.go:211] docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0601 10:35:36.340098    7808 cli_runner.go:217] Completed: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0535347s)
I0601 10:35:36.340098    7808 client.go:171] LocalClient.Create took 6.2918212s
I0601 10:35:38.354197    7808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0601 10:35:38.360195    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:39.397940    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:39.397940    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0377337s)
I0601 10:35:39.398203    7808 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:39.684417    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:40.702458    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:40.702458    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0180291s)
W0601 10:35:40.702458    7808 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
W0601 10:35:40.702458    7808 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:40.712807    7808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0601 10:35:40.717777    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:41.731036    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:41.731036    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0132475s)
I0601 10:35:41.731036    7808 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:41.941378    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:42.964908    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:42.964958    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0234536s)
W0601 10:35:42.965267    7808 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
W0601 10:35:42.965306    7808 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:42.965306    7808 start.go:134] duration metric: createHost completed in 12.9215001s
I0601 10:35:42.978130    7808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0601 10:35:42.985120    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:44.021610    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:44.021610    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0364783s)
I0601 10:35:44.021610    7808 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:44.357752    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:45.365338    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:45.365459    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0074332s)
W0601 10:35:45.365459    7808 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
W0601 10:35:45.365459    7808 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:45.374137    7808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0601 10:35:45.380143    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:46.425446    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:46.425446    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0452913s)
I0601 10:35:46.425446    7808 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:46.781763    7808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404
W0601 10:35:47.786123    7808 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404 returned with exit code 1
I0601 10:35:47.786123    7808 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: (1.0043485s)
W0601 10:35:47.786123    7808 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
W0601 10:35:47.786123    7808 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220601102952-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220601102952-9404: exit status 1
stdout:


stderr:
Error: No such container: functional-20220601102952-9404
I0601 10:35:47.786123    7808 fix.go:57] fixHost completed within 45.9903711s
I0601 10:35:47.786123    7808 start.go:81] releasing machines lock for "functional-20220601102952-9404", held for 45.9909708s
W0601 10:35:47.786997    7808 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220601102952-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system

I0601 10:35:47.810496    7808 out.go:177] 
W0601 10:35:47.813946    7808 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220601102952-9404 container: docker volume create functional-20220601102952-9404 --label name.minikube.sigs.k8s.io=functional-20220601102952-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220601102952-9404: error while creating volume root path '/var/lib/docker/volumes/functional-20220601102952-9404': mkdir /var/lib/docker/volumes/functional-20220601102952-9404: read-only file system

W0601 10:35:47.813946    7808 out.go:239] * Suggestion: Restart Docker
W0601 10:35:47.814464    7808 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
I0601 10:35:47.819128    7808 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (4.47s)

TestFunctional/parallel/StatusCmd (13.35s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd


=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 status: exit status 7 (3.012288s)

-- stdout --
	functional-20220601102952-9404
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0601 10:36:38.366856    6476 status.go:258] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	E0601 10:36:38.366887    6476 status.go:261] The "functional-20220601102952-9404" host does not exist!

** /stderr **
functional_test.go:848: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 status" : exit status 7
functional_test.go:852: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (3.0402444s)

-- stdout --
	host:Nonexistent,kublet:Nonexistent,apiserver:Nonexistent,kubeconfig:Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:36:41.407009    7860 status.go:258] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	E0601 10:36:41.407009    7860 status.go:261] The "functional-20220601102952-9404" host does not exist!

** /stderr **
functional_test.go:854: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:864: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 status -o json

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 status -o json: exit status 7 (3.1015115s)

-- stdout --
	{"Name":"functional-20220601102952-9404","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	E0601 10:36:44.509105    5820 status.go:258] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	E0601 10:36:44.509105    5820 status.go:261] The "functional-20220601102952-9404" host does not exist!

** /stderr **
functional_test.go:866: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.1465534s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404

=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (3.0349145s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:36:48.701935    3636 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/StatusCmd (13.35s)

TestFunctional/parallel/ServiceCmd (5.42s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd


=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220601102952-9404 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1432: (dbg) Non-zero exit: kubectl --context functional-20220601102952-9404 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8: exit status 1 (292.6806ms)

** stderr ** 
	W0601 10:36:30.183810    6964 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220601102952-9404" does not exist

** /stderr **
functional_test.go:1436: failed to create hello-node deployment with this command "kubectl --context functional-20220601102952-9404 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8": exit status 1.
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run:  kubectl --context functional-20220601102952-9404 describe po hello-node
functional_test.go:1405: (dbg) Non-zero exit: kubectl --context functional-20220601102952-9404 describe po hello-node: exit status 1 (295.6235ms)

** stderr ** 
	W0601 10:36:30.482124    4208 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* cluster has no server defined

** /stderr **
functional_test.go:1407: "kubectl --context functional-20220601102952-9404 describe po hello-node" failed: exit status 1
functional_test.go:1409: hello-node pod describe:
functional_test.go:1411: (dbg) Run:  kubectl --context functional-20220601102952-9404 logs -l app=hello-node
functional_test.go:1411: (dbg) Non-zero exit: kubectl --context functional-20220601102952-9404 logs -l app=hello-node: exit status 1 (305.2978ms)

** stderr ** 
	W0601 10:36:30.795608    1728 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* cluster has no server defined

** /stderr **
functional_test.go:1413: "kubectl --context functional-20220601102952-9404 logs -l app=hello-node" failed: exit status 1
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run:  kubectl --context functional-20220601102952-9404 describe svc hello-node
functional_test.go:1417: (dbg) Non-zero exit: kubectl --context functional-20220601102952-9404 describe svc hello-node: exit status 1 (294.5365ms)

** stderr ** 
	W0601 10:36:31.100141    5908 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* cluster has no server defined

** /stderr **
functional_test.go:1419: "kubectl --context functional-20220601102952-9404 describe svc hello-node" failed: exit status 1
functional_test.go:1421: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.1556316s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (3.0407938s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:36:35.353793    9204 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/ServiceCmd (5.42s)

TestFunctional/parallel/ServiceCmdConnect (5.52s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect


=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220601102952-9404 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20220601102952-9404 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8: exit status 1 (294.694ms)

** stderr ** 
	W0601 10:36:27.125018    8756 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220601102952-9404" does not exist

** /stderr **
functional_test.go:1562: failed to create hello-node deployment with this command "kubectl --context functional-20220601102952-9404 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8": exit status 1.
functional_test.go:1527: service test failed - dumping debug information
functional_test.go:1528: -----------------------service failure post-mortem--------------------------------
functional_test.go:1531: (dbg) Run:  kubectl --context functional-20220601102952-9404 describe po hello-node-connect
functional_test.go:1531: (dbg) Non-zero exit: kubectl --context functional-20220601102952-9404 describe po hello-node-connect: exit status 1 (318.858ms)

** stderr ** 
	W0601 10:36:27.442114    7992 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* cluster has no server defined

** /stderr **
functional_test.go:1533: "kubectl --context functional-20220601102952-9404 describe po hello-node-connect" failed: exit status 1
functional_test.go:1535: hello-node pod describe:
functional_test.go:1537: (dbg) Run:  kubectl --context functional-20220601102952-9404 logs -l app=hello-node-connect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1537: (dbg) Non-zero exit: kubectl --context functional-20220601102952-9404 logs -l app=hello-node-connect: exit status 1 (321.3453ms)

** stderr ** 
	W0601 10:36:27.780718    3700 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* cluster has no server defined

** /stderr **
functional_test.go:1539: "kubectl --context functional-20220601102952-9404 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1541: hello-node logs:
functional_test.go:1543: (dbg) Run:  kubectl --context functional-20220601102952-9404 describe svc hello-node-connect
functional_test.go:1543: (dbg) Non-zero exit: kubectl --context functional-20220601102952-9404 describe svc hello-node-connect: exit status 1 (309.3694ms)

** stderr ** 
	W0601 10:36:28.094056    9280 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* cluster has no server defined

** /stderr **
functional_test.go:1545: "kubectl --context functional-20220601102952-9404 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1547: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.2168275s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (3.0166781s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:36:32.407756    4216 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (5.52s)

TestFunctional/parallel/PersistentVolumeClaim (4.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim


=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-20220601102952-9404" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.198197s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (2.9990409s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:36:08.195373    9584 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (4.21s)

TestFunctional/parallel/SSHCmd (11.05s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd


=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "echo hello": exit status 80 (3.3542766s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_19232f4b01a263c7fe4da55009757983856b4b95_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1659: failed to run an ssh command. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh \"echo hello\"" : exit status 80
functional_test.go:1663: expected minikube ssh command output to be -"hello"- but got *"\n\n"*. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh \"echo hello\""
functional_test.go:1671: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "cat /etc/hostname"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "cat /etc/hostname": exit status 80 (3.4454064s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_38bcdef24fb924cc90e97c11e7d475c51b658987_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1677: failed to run an ssh command. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh \"cat /etc/hostname\"" : exit status 80
functional_test.go:1681: expected minikube ssh command output to be -"functional-20220601102952-9404"- but got *"\n\n"*. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/SSHCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404
=== CONT  TestFunctional/parallel/SSHCmd
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.1944891s)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404
=== CONT  TestFunctional/parallel/SSHCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (3.0490236s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0601 10:36:26.876953    9108 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/SSHCmd (11.05s)
TestFunctional/parallel/CpCmd (13.27s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cp testdata\cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cp testdata\cp-test.txt /home/docker/cp-test.txt: exit status 80 (3.3089723s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                    │
	│    * If the above advice does not help, please let us know:                                                        │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                      │
	│                                                                                                                    │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                           │
	│    * Please also attach the following file to the GitHub issue:                                                    │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_cp_61e6e7c82587b4e90872857c87eff14ac40e447c_1.log    │
	│                                                                                                                    │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
helpers_test.go:559: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cp testdata\\cp-test.txt /home/docker/cp-test.txt" : exit status 80
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh -n functional-20220601102952-9404 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh -n functional-20220601102952-9404 "sudo cat /home/docker/cp-test.txt": exit status 80 (3.3762777s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f9fbdc48f4e6e25fa352a85c2bd7e3c2c13fee65_12.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
helpers_test.go:537: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh -n functional-20220601102952-9404 \"sudo cat /home/docker/cp-test.txt\"" : exit status 80
helpers_test.go:571: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"Test file for checking file cp process",
+ 	"\n\n",
  )
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cp functional-20220601102952-9404:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd1643454593\001\cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cp functional-20220601102952-9404:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd1643454593\001\cp-test.txt: exit status 80 (3.2705433s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                    │
	│    * If the above advice does not help, please let us know:                                                        │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                      │
	│                                                                                                                    │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                           │
	│    * Please also attach the following file to the GitHub issue:                                                    │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_cp_02fee327c4360102a00caf48406395e953460914_0.log    │
	│                                                                                                                    │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
helpers_test.go:559: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cp functional-20220601102952-9404:/home/docker/cp-test.txt C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\TestFunctionalparallelCpCmd1643454593\\001\\cp-test.txt" : exit status 80
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh -n functional-20220601102952-9404 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh -n functional-20220601102952-9404 "sudo cat /home/docker/cp-test.txt": exit status 80 (3.3048704s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f9fbdc48f4e6e25fa352a85c2bd7e3c2c13fee65_12.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
helpers_test.go:537: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh -n functional-20220601102952-9404 \"sudo cat /home/docker/cp-test.txt\"" : exit status 80
helpers_test.go:526: failed to read test file 'testdata/cp-test.txt' : open C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd1643454593\001\cp-test.txt: The system cannot find the file specified.
helpers_test.go:571: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"\n\n",
+ 	"",
  )
--- FAIL: TestFunctional/parallel/CpCmd (13.27s)
TestFunctional/parallel/MySQL (4.6s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220601102952-9404 replace --force -f testdata\mysql.yaml
functional_test.go:1719: (dbg) Non-zero exit: kubectl --context functional-20220601102952-9404 replace --force -f testdata\mysql.yaml: exit status 1 (293.2129ms)
** stderr ** 
	W0601 10:36:08.859316    9388 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220601102952-9404" does not exist
** /stderr **
functional_test.go:1721: failed to kubectl replace mysql: args "kubectl --context functional-20220601102952-9404 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.217108s)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (3.0735004s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0601 10:36:13.200421    9648 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/MySQL (4.60s)
TestFunctional/parallel/FileSync (7.63s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/9404/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /etc/test/nested/copy/9404/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1857: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /etc/test/nested/copy/9404/hosts": exit status 80 (3.3824465s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_fa2e1d639ba992139c0500002fdd70b8017b15b7_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1859: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /etc/test/nested/copy/9404/hosts" failed: exit status 80
functional_test.go:1862: file sync test content: 

functional_test.go:1872: /etc/sync.test content mismatch (-want +got):
  string(
- 	"Test file for checking file sync process",
+ 	"\n\n",
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/FileSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.1757866s)

-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404
=== CONT  TestFunctional/parallel/FileSync
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (3.058283s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0601 10:36:15.837855    1700 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/FileSync (7.63s)
TestFunctional/parallel/CertSync (24.4s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/9404.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /etc/ssl/certs/9404.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /etc/ssl/certs/9404.pem": exit status 80 (3.3625415s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_b736a5cd720e85269aa210e46738fe5e2039b326_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1901: failed to check existence of "/etc/ssl/certs/9404.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh \"sudo cat /etc/ssl/certs/9404.pem\"": exit status 80
functional_test.go:1907: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/9404.pem mismatch (-want +got):
  string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
  )
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/9404.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /usr/share/ca-certificates/9404.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /usr/share/ca-certificates/9404.pem": exit status 80 (3.2534707s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_a7091e934d9f994c8de498bc45d39cb7f756848a_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1901: failed to check existence of "/usr/share/ca-certificates/9404.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh \"sudo cat /usr/share/ca-certificates/9404.pem\"": exit status 80
functional_test.go:1907: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/9404.pem mismatch (-want +got):
  string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
  )
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 80 (3.3033421s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_fea49abfab0323d8512b535581403500420d48f0_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1901: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 80
functional_test.go:1907: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
  )
functional_test.go:1925: Checking for existence of /etc/ssl/certs/94042.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /etc/ssl/certs/94042.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /etc/ssl/certs/94042.pem": exit status 80 (3.4520226s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_35c7bf4be45162324775ef4aed5beac7e3221fb3_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1928: failed to check existence of "/etc/ssl/certs/94042.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh \"sudo cat /etc/ssl/certs/94042.pem\"": exit status 80
functional_test.go:1934: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/94042.pem mismatch (-want +got):
  string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
  )
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/94042.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /usr/share/ca-certificates/94042.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /usr/share/ca-certificates/94042.pem": exit status 80 (3.3524256s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_d4df0abe127dd78c53972fce00fa594ec5a109a7_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1928: failed to check existence of "/usr/share/ca-certificates/94042.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh \"sudo cat /usr/share/ca-certificates/94042.pem\"": exit status 80
functional_test.go:1934: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/94042.pem mismatch (-want +got):
  string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
  )
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 80 (3.3850702s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_15a8ec4b54c4600ccdf64f589dd9f75cfe823832_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1928: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 80
functional_test.go:1934: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/CertSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404

=== CONT  TestFunctional/parallel/CertSync
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.2153718s)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404

=== CONT  TestFunctional/parallel/CertSync
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (3.068399s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0601 10:36:31.876638    2276 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/CertSync (24.40s)
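Every failure in this block reduces to the same root cause: the `functional-20220601102952-9404` container no longer exists, so minikube's `docker container inspect` state probe returns "No such container" and each command aborts with GUEST_STATUS (exit status 80). A minimal sketch of that probe, assuming Docker is on PATH (the container name is copied from the log and will normally not exist on another machine):

```shell
# Reproduce the host-state probe that fails throughout this report.
# On a machine without this container (or without docker at all),
# the probe falls through to the "unknown" branch, mirroring GUEST_STATUS.
NAME=functional-20220601102952-9404
if state=$(docker container inspect "$NAME" --format '{{.State.Status}}' 2>/dev/null); then
  echo "state: $state"
else
  echo "state: unknown (No such container: $NAME)"
fi
```

Since the container is gone rather than merely stopped, every dependent subtest (CertSync, NodeLabels, NonActiveRuntimeDisabled, Version, DockerEnv) fails the same way in its post-mortem.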
TestFunctional/parallel/NodeLabels (4.62s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220601102952-9404 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Non-zero exit: kubectl --context functional-20220601102952-9404 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (345.8167ms)
** stderr ** 
	W0601 10:36:04.265888    4556 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* cluster has no server defined
** /stderr **
functional_test.go:216: failed to 'kubectl get nodes' with args "kubectl --context functional-20220601102952-9404 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:222: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	W0601 10:36:04.265888    4556 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* cluster has no server defined
** /stderr **
functional_test.go:222: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	W0601 10:36:04.265888    4556 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* cluster has no server defined
** /stderr **
functional_test.go:222: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	W0601 10:36:04.265888    4556 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* cluster has no server defined
** /stderr **
functional_test.go:222: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	W0601 10:36:04.265888    4556 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220601102952-9404
	* cluster has no server defined
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220601102952-9404

=== CONT  TestFunctional/parallel/NodeLabels
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220601102952-9404: exit status 1 (1.1799538s)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220601102952-9404
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404

=== CONT  TestFunctional/parallel/NodeLabels
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220601102952-9404 -n functional-20220601102952-9404: exit status 7 (3.0724263s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0601 10:36:08.617141    2212 status.go:247] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220601102952-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/NodeLabels (4.62s)
TestFunctional/parallel/NonActiveRuntimeDisabled (3.49s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh "sudo systemctl is-active crio": exit status 80 (3.4847026s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_4c116c6236290140afdbb5dcaafee8e0c3250b76_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1956: output of 
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_4c116c6236290140afdbb5dcaafee8e0c3250b76_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **: exit status 80
functional_test.go:1959: For runtime "docker": expected "crio" to be inactive but got "\n\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (3.49s)
TestFunctional/parallel/Version/components (3.31s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 version -o=json --components: exit status 80 (3.3044586s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_version_584df66c7473738ba6bddab8b00bad09d875c20e_2.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2198: error version: exit status 80
functional_test.go:2203: expected to see "buildctl" in the minikube version --components but got:
functional_test.go:2203: expected to see "commit" in the minikube version --components but got:
functional_test.go:2203: expected to see "containerd" in the minikube version --components but got:
functional_test.go:2203: expected to see "crictl" in the minikube version --components but got:
functional_test.go:2203: expected to see "crio" in the minikube version --components but got:
functional_test.go:2203: expected to see "ctr" in the minikube version --components but got:
functional_test.go:2203: expected to see "docker" in the minikube version --components but got:
functional_test.go:2203: expected to see "minikubeVersion" in the minikube version --components but got:
functional_test.go:2203: expected to see "podman" in the minikube version --components but got:
functional_test.go:2203: expected to see "run" in the minikube version --components but got:
functional_test.go:2203: expected to see "crun" in the minikube version --components but got:
--- FAIL: TestFunctional/parallel/Version/components (3.31s)
TestFunctional/parallel/DockerEnv/powershell (9.42s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220601102952-9404"

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220601102952-9404": exit status 1 (9.4094832s)
-- stdout --
	functional-20220601102952-9404
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_docker-env_547776f721aba6dceba24106cb61c1127a06fa4f_3.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	false : The term 'false' is not recognized as the name of a cmdlet, function, script file, or operable program. Check 
	the spelling of the name, or if a path was included, verify that the path is correct and try again.
	At line:1 char:1
	+ false exit code 80
	+ ~~~~~
	    + CategoryInfo          : ObjectNotFound: (false:String) [], CommandNotFoundException
	    + FullyQualifiedErrorId : CommandNotFoundException
	 
	E0601 10:36:15.652047    3444 status.go:258] status error: host: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	E0601 10:36:15.652047    3444 status.go:261] The "functional-20220601102952-9404" host does not exist!

** /stderr **
functional_test.go:497: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/powershell (9.42s)

TestFunctional/parallel/UpdateContextCmd/no_changes (3.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 update-context --alsologtostderr -v=2: exit status 80 (3.329051s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0601 10:36:50.237489   10128 out.go:296] Setting OutFile to fd 676 ...
	I0601 10:36:50.305428   10128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:36:50.305428   10128 out.go:309] Setting ErrFile to fd 812...
	I0601 10:36:50.305428   10128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:36:50.319326   10128 mustload.go:65] Loading cluster: functional-20220601102952-9404
	I0601 10:36:50.320045   10128 config.go:178] Loaded profile config "functional-20220601102952-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 10:36:50.333917   10128 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:36:53.023981   10128 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:36:53.024153   10128 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (2.6897195s)
	I0601 10:36:53.027944   10128 out.go:177] 
	W0601 10:36:53.030103   10128 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:36:53.030103   10128 out.go:239] * 
	* 
	W0601 10:36:53.291805   10128 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_4.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_4.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 10:36:53.295892   10128 out.go:177] 

** /stderr **
functional_test.go:2047: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:2052: update-context: got="\n\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (3.33s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 update-context --alsologtostderr -v=2: exit status 80 (3.2369217s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0601 10:36:52.704359    7720 out.go:296] Setting OutFile to fd 644 ...
	I0601 10:36:52.775311    7720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:36:52.775311    7720 out.go:309] Setting ErrFile to fd 716...
	I0601 10:36:52.775432    7720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:36:52.790445    7720 mustload.go:65] Loading cluster: functional-20220601102952-9404
	I0601 10:36:52.791908    7720 config.go:178] Loaded profile config "functional-20220601102952-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 10:36:52.814939    7720 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:36:55.398800    7720 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:36:55.398800    7720 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (2.5838326s)
	I0601 10:36:55.404631    7720 out.go:177] 
	W0601 10:36:55.407342    7720 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:36:55.407342    7720 out.go:239] * 
	* 
	W0601 10:36:55.663346    7720 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_4.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_4.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 10:36:55.670214    7720 out.go:177] 

** /stderr **
functional_test.go:2047: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:2052: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.24s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (3.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 update-context --alsologtostderr -v=2: exit status 80 (3.3196844s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0601 10:36:51.161005    5936 out.go:296] Setting OutFile to fd 628 ...
	I0601 10:36:51.228925    5936 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:36:51.228925    5936 out.go:309] Setting ErrFile to fd 840...
	I0601 10:36:51.228925    5936 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:36:51.242119    5936 mustload.go:65] Loading cluster: functional-20220601102952-9404
	I0601 10:36:51.242780    5936 config.go:178] Loaded profile config "functional-20220601102952-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 10:36:51.257031    5936 cli_runner.go:164] Run: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}
	W0601 10:36:53.889387    5936 cli_runner.go:211] docker container inspect functional-20220601102952-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:36:53.889580    5936 cli_runner.go:217] Completed: docker container inspect functional-20220601102952-9404 --format={{.State.Status}}: (2.6321199s)
	I0601 10:36:53.901160    5936 out.go:177] 
	W0601 10:36:53.904490    5936 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	W0601 10:36:53.904490    5936 out.go:239] * 
	* 
	W0601 10:36:54.198332    5936 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_4.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_4.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 10:36:54.201602    5936 out.go:177] 

** /stderr **
functional_test.go:2047: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20220601102952-9404 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:2052: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (3.33s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:143: failed to get Kubernetes client for "functional-20220601102952-9404": client config: context "functional-20220601102952-9404" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/ImageCommands/ImageListShort (3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls --format short: (2.9990585s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls --format short:

functional_test.go:270: expected k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (3.00s)

TestFunctional/parallel/ImageCommands/ImageListTable (2.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls --format table

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls --format table: (2.9072076s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls --format table:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:270: expected | k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (2.91s)

TestFunctional/parallel/ImageCommands/ImageListJson (3.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls --format json: (3.0345585s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls --format json:
[]
functional_test.go:270: expected ["k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (3.03s)

TestFunctional/parallel/ImageCommands/ImageListYaml (2.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls --format yaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls --format yaml: (2.9853504s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls --format yaml:
[]

functional_test.go:270: expected - k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (2.99s)

TestFunctional/parallel/ImageCommands/ImageBuild (9.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 ssh pgrep buildkitd: exit status 80 (3.2985349s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220601102952-9404": docker container inspect functional-20220601102952-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220601102952-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f5578f3b7737bbd9a15ad6eab50db6197ebdaf5a_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image build -t localhost/my-image:functional-20220601102952-9404 testdata\build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image build -t localhost/my-image:functional-20220601102952-9404 testdata\build: (2.9435022s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls: (2.9339342s)
functional_test.go:438: expected "localhost/my-image:functional-20220601102952-9404" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (9.18s)

TestFunctional/parallel/ImageCommands/Setup (2.19s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Non-zero exit: docker pull gcr.io/google-containers/addon-resizer:1.8.8: exit status 1 (2.1780916s)

** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

** /stderr **
functional_test.go:339: failed to setup test (pull image): exit status 1

** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (2.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102952-9404

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102952-9404: (3.3349255s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls: (3.1600457s)
functional_test.go:438: expected "gcr.io/google-containers/addon-resizer:functional-20220601102952-9404" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.49s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102952-9404

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102952-9404: (3.244124s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls: (3.0527822s)
functional_test.go:438: expected "gcr.io/google-containers/addon-resizer:functional-20220601102952-9404" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.30s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Non-zero exit: docker pull gcr.io/google-containers/addon-resizer:1.8.9: exit status 1 (2.1620853s)

** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

** /stderr **
functional_test.go:232: failed to setup test (pull image): exit status 1

** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image save gcr.io/google-containers/addon-resizer:functional-20220601102952-9404 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image save gcr.io/google-containers/addon-resizer:functional-20220601102952-9404 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (3.1093142s)
functional_test.go:381: expected "C:\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.11s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: exit status 80 (2.3435946s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_IMAGE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_4f97aa0f12ba576a16ca2b05292f7afcda49921e_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:406: loading image into minikube from file: exit status 80

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_IMAGE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_4f97aa0f12ba576a16ca2b05292f7afcda49921e_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.34s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220601102952-9404

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Non-zero exit: docker rmi gcr.io/google-containers/addon-resizer:functional-20220601102952-9404: exit status 1 (1.1416174s)

** stderr ** 
	Error: No such image: gcr.io/google-containers/addon-resizer:functional-20220601102952-9404

** /stderr **
functional_test.go:416: failed to remove image from docker: exit status 1

** stderr ** 
	Error: No such image: gcr.io/google-containers/addon-resizer:functional-20220601102952-9404

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.15s)

TestIngressAddonLegacy/StartLegacyK8sCluster (77.66s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220601104200-9404 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220601104200-9404 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: exit status 60 (1m17.5823487s)

-- stdout --
	* [ingress-addon-legacy-20220601104200-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node ingress-addon-legacy-20220601104200-9404 in cluster ingress-addon-legacy-20220601104200-9404
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* docker "ingress-addon-legacy-20220601104200-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 10:42:00.583687    9744 out.go:296] Setting OutFile to fd 820 ...
	I0601 10:42:00.640754    9744 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:42:00.640754    9744 out.go:309] Setting ErrFile to fd 960...
	I0601 10:42:00.640754    9744 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:42:00.655765    9744 out.go:303] Setting JSON to false
	I0601 10:42:00.657979    9744 start.go:115] hostinfo: {"hostname":"minikube2","uptime":12056,"bootTime":1654068064,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 10:42:00.657979    9744 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 10:42:00.662338    9744 out.go:177] * [ingress-addon-legacy-20220601104200-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 10:42:00.666172    9744 notify.go:193] Checking for updates...
	I0601 10:42:00.667824    9744 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 10:42:00.670859    9744 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 10:42:00.673325    9744 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:42:00.675707    9744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:42:00.677366    9744 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:42:03.240255    9744 docker.go:137] docker version: linux-20.10.14
	I0601 10:42:03.250915    9744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:42:05.296076    9744 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0451383s)
	I0601 10:42:05.296076    9744 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 10:42:04.2933219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:42:05.305920    9744 out.go:177] * Using the docker driver based on user configuration
	I0601 10:42:05.306960    9744 start.go:284] selected driver: docker
	I0601 10:42:05.306960    9744 start.go:806] validating driver "docker" against <nil>
	I0601 10:42:05.306960    9744 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:42:05.437310    9744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:42:07.458478    9744 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0211459s)
	I0601 10:42:07.458478    9744 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 10:42:06.4467258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:42:07.459287    9744 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 10:42:07.460045    9744 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 10:42:07.465033    9744 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 10:42:07.467172    9744 cni.go:95] Creating CNI manager for ""
	I0601 10:42:07.467172    9744 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 10:42:07.467172    9744 start_flags.go:306] config:
	{Name:ingress-addon-legacy-20220601104200-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220601104200-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:42:07.471727    9744 out.go:177] * Starting control plane node ingress-addon-legacy-20220601104200-9404 in cluster ingress-addon-legacy-20220601104200-9404
	I0601 10:42:07.474146    9744 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 10:42:07.477714    9744 out.go:177] * Pulling base image ...
	I0601 10:42:07.482121    9744 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0601 10:42:07.482121    9744 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 10:42:07.535244    9744 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0601 10:42:07.535244    9744 cache.go:57] Caching tarball of preloaded images
	I0601 10:42:07.535813    9744 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0601 10:42:07.539692    9744 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0601 10:42:07.542400    9744 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0601 10:42:07.612287    9744 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0601 10:42:08.585854    9744 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 10:42:08.585854    9744 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:42:08.585854    9744 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:42:08.585854    9744 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 10:42:08.586383    9744 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 10:42:08.586468    9744 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 10:42:08.586555    9744 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 10:42:08.586678    9744 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 10:42:08.586755    9744 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:42:10.915962    9744 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-332942778: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-332942778: read-only file system"}
	I0601 10:42:10.915962    9744 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 10:42:12.285959    9744 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0601 10:42:12.286968    9744 preload.go:256] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0601 10:42:13.460127    9744 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0601 10:42:13.461136    9744 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601104200-9404\config.json ...
	I0601 10:42:13.461136    9744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220601104200-9404\config.json: {Name:mk5647a09cb49f61eee77a1289afb9677f1edc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:42:13.463097    9744 cache.go:206] Successfully downloaded all kic artifacts
	I0601 10:42:13.463097    9744 start.go:352] acquiring machines lock for ingress-addon-legacy-20220601104200-9404: {Name:mk9185e62d59160e595bffa3c0d5147301a27e85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:42:13.463097    9744 start.go:356] acquired machines lock for "ingress-addon-legacy-20220601104200-9404" in 0s
	I0601 10:42:13.463097    9744 start.go:91] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220601104200-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220601104200-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 10:42:13.463809    9744 start.go:131] createHost starting for "" (driver="docker")
	I0601 10:42:13.605208    9744 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0601 10:42:13.606006    9744 start.go:165] libmachine.API.Create for "ingress-addon-legacy-20220601104200-9404" (driver="docker")
	I0601 10:42:13.606006    9744 client.go:168] LocalClient.Create starting
	I0601 10:42:13.606932    9744 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 10:42:13.607146    9744 main.go:134] libmachine: Decoding PEM data...
	I0601 10:42:13.607146    9744 main.go:134] libmachine: Parsing certificate...
	I0601 10:42:13.607277    9744 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 10:42:13.607277    9744 main.go:134] libmachine: Decoding PEM data...
	I0601 10:42:13.607277    9744 main.go:134] libmachine: Parsing certificate...
	I0601 10:42:13.616410    9744 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220601104200-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:42:14.651751    9744 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220601104200-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:42:14.651751    9744 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220601104200-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0353293s)
	I0601 10:42:14.659750    9744 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220601104200-9404] to gather additional debugging logs...
	I0601 10:42:14.659750    9744 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220601104200-9404
	W0601 10:42:15.682488    9744 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:42:15.994743    9744 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220601104200-9404: (1.0227261s)
	I0601 10:42:15.994743    9744 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220601104200-9404]: docker network inspect ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:15.994743    9744 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220601104200-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220601104200-9404
	
	** /stderr **
	I0601 10:42:16.005480    9744 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:42:17.044006    9744 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0384756s)
	I0601 10:42:17.065639    9744 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000062a0] misses:0}
	I0601 10:42:17.065639    9744 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:42:17.065639    9744 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220601104200-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 10:42:17.074354    9744 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601104200-9404
	W0601 10:42:18.078340    9744 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:42:18.078509    9744 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601104200-9404: (1.0039232s)
	E0601 10:42:18.078657    9744 network_create.go:104] error while trying to create docker network ingress-addon-legacy-20220601104200-9404 192.168.49.0/24: create docker network ingress-addon-legacy-20220601104200-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cf29beadd4af35684fb98d0908f2060ed94bd4009454f4f60c93055abec5cde0 (br-cf29beadd4af): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 10:42:18.078719    9744 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220601104200-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cf29beadd4af35684fb98d0908f2060ed94bd4009454f4f60c93055abec5cde0 (br-cf29beadd4af): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220601104200-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cf29beadd4af35684fb98d0908f2060ed94bd4009454f4f60c93055abec5cde0 (br-cf29beadd4af): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 10:42:18.091352    9744 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 10:42:19.109745    9744 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.018148s)
	I0601 10:42:19.115429    9744 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 10:42:20.123411    9744 cli_runner.go:211] docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 10:42:20.123480    9744 cli_runner.go:217] Completed: docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0077422s)
	I0601 10:42:20.123544    9744 client.go:171] LocalClient.Create took 6.5174664s
	I0601 10:42:22.146244    9744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:42:22.153328    9744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404
	W0601 10:42:23.160883    9744 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:42:23.160883    9744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: (1.0075435s)
	I0601 10:42:23.160883    9744 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:23.455569    9744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404
	W0601 10:42:24.479460    9744 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:42:24.479460    9744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: (1.0237538s)
	W0601 10:42:24.479902    9744 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	
	W0601 10:42:24.479902    9744 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:24.489835    9744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:42:24.495931    9744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404
	W0601 10:42:25.510615    9744 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:42:25.510701    9744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: (1.0145704s)
	I0601 10:42:25.510815    9744 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:25.812278    9744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404
	W0601 10:42:26.838082    9744 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:42:26.838420    9744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: (1.0257926s)
	W0601 10:42:26.838489    9744 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	
	W0601 10:42:26.838489    9744 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:26.838489    9744 start.go:134] duration metric: createHost completed in 13.3745318s
	I0601 10:42:26.838489    9744 start.go:81] releasing machines lock for "ingress-addon-legacy-20220601104200-9404", held for 13.3752439s
	W0601 10:42:26.838489    9744 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220601104200-9404 container: docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220601104200-9404: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404: read-only file system
	I0601 10:42:26.851693    9744 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:42:27.871256    9744 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:42:27.871256    9744 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: (1.019552s)
	I0601 10:42:27.871256    9744 delete.go:82] Unable to get host status for ingress-addon-legacy-20220601104200-9404, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	W0601 10:42:27.871256    9744 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220601104200-9404 container: docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220601104200-9404: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220601104200-9404 container: docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220601104200-9404: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404: read-only file system
	
	I0601 10:42:27.871256    9744 start.go:614] Will try again in 5 seconds ...
	I0601 10:42:32.884913    9744 start.go:352] acquiring machines lock for ingress-addon-legacy-20220601104200-9404: {Name:mk9185e62d59160e595bffa3c0d5147301a27e85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:42:32.885347    9744 start.go:356] acquired machines lock for "ingress-addon-legacy-20220601104200-9404" in 317µs
	I0601 10:42:32.885605    9744 start.go:94] Skipping create...Using existing machine configuration
	I0601 10:42:32.885605    9744 fix.go:55] fixHost starting: 
	I0601 10:42:32.892085    9744 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:42:33.965985    9744 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:42:33.965985    9744 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: (1.0738882s)
	I0601 10:42:33.965985    9744 fix.go:103] recreateIfNeeded on ingress-addon-legacy-20220601104200-9404: state= err=unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:33.965985    9744 fix.go:108] machineExists: false. err=machine does not exist
	I0601 10:42:33.969734    9744 out.go:177] * docker "ingress-addon-legacy-20220601104200-9404" container is missing, will recreate.
	I0601 10:42:33.972686    9744 delete.go:124] DEMOLISHING ingress-addon-legacy-20220601104200-9404 ...
	I0601 10:42:33.978672    9744 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:42:35.028544    9744 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:42:35.028544    9744 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: (1.0498605s)
	W0601 10:42:35.028544    9744 stop.go:75] unable to get state: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:35.028544    9744 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:35.042906    9744 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:42:36.088995    9744 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:42:36.089043    9744 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: (1.0457725s)
	I0601 10:42:36.089134    9744 delete.go:82] Unable to get host status for ingress-addon-legacy-20220601104200-9404, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:36.096930    9744 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20220601104200-9404
	W0601 10:42:37.106314    9744 cli_runner.go:211] docker container inspect -f {{.Id}} ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:42:37.106346    9744 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} ingress-addon-legacy-20220601104200-9404: (1.0091913s)
	I0601 10:42:37.106421    9744 kic.go:356] could not find the container ingress-addon-legacy-20220601104200-9404 to remove it. will try anyways
	I0601 10:42:37.114208    9744 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:42:38.122828    9744 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:42:38.122828    9744 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: (1.0086087s)
	W0601 10:42:38.122828    9744 oci.go:84] error getting container status, will try to delete anyways: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:38.129677    9744 cli_runner.go:164] Run: docker exec --privileged -t ingress-addon-legacy-20220601104200-9404 /bin/bash -c "sudo init 0"
	W0601 10:42:39.167277    9744 cli_runner.go:211] docker exec --privileged -t ingress-addon-legacy-20220601104200-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 10:42:39.167349    9744 cli_runner.go:217] Completed: docker exec --privileged -t ingress-addon-legacy-20220601104200-9404 /bin/bash -c "sudo init 0": (1.0374982s)
	I0601 10:42:39.167349    9744 oci.go:625] error shutdown ingress-addon-legacy-20220601104200-9404: docker exec --privileged -t ingress-addon-legacy-20220601104200-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:40.185824    9744 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:42:41.195432    9744 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:42:41.195463    9744 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: (1.0095203s)
	I0601 10:42:41.195463    9744 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:41.195463    9744 oci.go:639] temporary error: container ingress-addon-legacy-20220601104200-9404 status is  but expect it to be exited
	I0601 10:42:41.195463    9744 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:41.671511    9744 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:42:42.677493    9744 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:42:42.677557    9744 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: (1.0058617s)
	I0601 10:42:42.677617    9744 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:42.677617    9744 oci.go:639] temporary error: container ingress-addon-legacy-20220601104200-9404 status is  but expect it to be exited
	I0601 10:42:42.677687    9744 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:43.585143    9744 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:42:44.607638    9744 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:42:44.607747    9744 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: (1.0224836s)
	I0601 10:42:44.607964    9744 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:44.607964    9744 oci.go:639] temporary error: container ingress-addon-legacy-20220601104200-9404 status is  but expect it to be exited
	I0601 10:42:44.607964    9744 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:45.254726    9744 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:42:46.253615    9744 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:42:46.253721    9744 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:46.253883    9744 oci.go:639] temporary error: container ingress-addon-legacy-20220601104200-9404 status is  but expect it to be exited
	I0601 10:42:46.253959    9744 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:47.371530    9744 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:42:48.373581    9744 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:42:48.373581    9744 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: (1.0020407s)
	I0601 10:42:48.373581    9744 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:48.373581    9744 oci.go:639] temporary error: container ingress-addon-legacy-20220601104200-9404 status is  but expect it to be exited
	I0601 10:42:48.373581    9744 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:49.904775    9744 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:42:50.912410    9744 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:42:50.912410    9744 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: (1.0076244s)
	I0601 10:42:50.912410    9744 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:50.912410    9744 oci.go:639] temporary error: container ingress-addon-legacy-20220601104200-9404 status is  but expect it to be exited
	I0601 10:42:50.912410    9744 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:53.962446    9744 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:42:54.984729    9744 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:42:54.984729    9744 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: (1.0222721s)
	I0601 10:42:54.984729    9744 oci.go:637] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:54.984729    9744 oci.go:639] temporary error: container ingress-addon-legacy-20220601104200-9404 status is  but expect it to be exited
	I0601 10:42:54.984729    9744 oci.go:88] couldn't shut down ingress-addon-legacy-20220601104200-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	 
	I0601 10:42:54.992680    9744 cli_runner.go:164] Run: docker rm -f -v ingress-addon-legacy-20220601104200-9404
	I0601 10:42:55.999211    9744 cli_runner.go:217] Completed: docker rm -f -v ingress-addon-legacy-20220601104200-9404: (1.0062871s)
	I0601 10:42:56.007274    9744 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20220601104200-9404
	W0601 10:42:57.015580    9744 cli_runner.go:211] docker container inspect -f {{.Id}} ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:42:57.015580    9744 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} ingress-addon-legacy-20220601104200-9404: (1.0082939s)
	I0601 10:42:57.021582    9744 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220601104200-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:42:58.047510    9744 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220601104200-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:42:58.047510    9744 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220601104200-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0259172s)
	I0601 10:42:58.055017    9744 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220601104200-9404] to gather additional debugging logs...
	I0601 10:42:58.055017    9744 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220601104200-9404
	W0601 10:42:59.080520    9744 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:42:59.080520    9744 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220601104200-9404: (1.0254912s)
	I0601 10:42:59.080520    9744 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220601104200-9404]: docker network inspect ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220601104200-9404
	I0601 10:42:59.080520    9744 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220601104200-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220601104200-9404
	
	** /stderr **
	W0601 10:42:59.082060    9744 delete.go:139] delete failed (probably ok) <nil>
	I0601 10:42:59.082060    9744 fix.go:115] Sleeping 1 second for extra luck!
	I0601 10:43:00.091388    9744 start.go:131] createHost starting for "" (driver="docker")
	I0601 10:43:00.096889    9744 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0601 10:43:00.097598    9744 start.go:165] libmachine.API.Create for "ingress-addon-legacy-20220601104200-9404" (driver="docker")
	I0601 10:43:00.097598    9744 client.go:168] LocalClient.Create starting
	I0601 10:43:00.098172    9744 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 10:43:00.098399    9744 main.go:134] libmachine: Decoding PEM data...
	I0601 10:43:00.098399    9744 main.go:134] libmachine: Parsing certificate...
	I0601 10:43:00.098399    9744 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 10:43:00.098972    9744 main.go:134] libmachine: Decoding PEM data...
	I0601 10:43:00.098972    9744 main.go:134] libmachine: Parsing certificate...
	I0601 10:43:00.106339    9744 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220601104200-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 10:43:01.139872    9744 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220601104200-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 10:43:01.139872    9744 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220601104200-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0332806s)
	I0601 10:43:01.146849    9744 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220601104200-9404] to gather additional debugging logs...
	I0601 10:43:01.146849    9744 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220601104200-9404
	W0601 10:43:02.201476    9744 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:43:02.201672    9744 cli_runner.go:217] Completed: docker network inspect ingress-addon-legacy-20220601104200-9404: (1.0546156s)
	I0601 10:43:02.201724    9744 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220601104200-9404]: docker network inspect ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220601104200-9404
	I0601 10:43:02.201781    9744 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220601104200-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220601104200-9404
	
	** /stderr **
	I0601 10:43:02.212276    9744 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 10:43:03.286623    9744 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0742174s)
	I0601 10:43:03.305462    9744 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062a0] amended:false}} dirty:map[] misses:0}
	I0601 10:43:03.305858    9744 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:43:03.330074    9744 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062a0] amended:true}} dirty:map[192.168.49.0:0xc0000062a0 192.168.58.0:0xc000a514c8] misses:0}
	I0601 10:43:03.331201    9744 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 10:43:03.331201    9744 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220601104200-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 10:43:03.339699    9744 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601104200-9404
	W0601 10:43:04.393040    9744 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:43:04.393137    9744 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601104200-9404: (1.0531234s)
	E0601 10:43:04.393168    9744 network_create.go:104] error while trying to create docker network ingress-addon-legacy-20220601104200-9404 192.168.58.0/24: create docker network ingress-addon-legacy-20220601104200-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3d8d2d2849df6abebd411b4f4a9d0817db8e82e80ba05c9efe0e0a5e50e860f7 (br-3d8d2d2849df): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 10:43:04.393263    9744 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220601104200-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3d8d2d2849df6abebd411b4f4a9d0817db8e82e80ba05c9efe0e0a5e50e860f7 (br-3d8d2d2849df): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220601104200-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3d8d2d2849df6abebd411b4f4a9d0817db8e82e80ba05c9efe0e0a5e50e860f7 (br-3d8d2d2849df): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 10:43:04.407862    9744 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 10:43:05.468324    9744 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0604504s)
	I0601 10:43:05.475238    9744 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 10:43:06.523335    9744 cli_runner.go:211] docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 10:43:06.523385    9744 cli_runner.go:217] Completed: docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0479675s)
	I0601 10:43:06.523583    9744 client.go:171] LocalClient.Create took 6.4258689s
	I0601 10:43:08.542693    9744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:43:08.548435    9744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404
	W0601 10:43:09.559996    9744 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:43:09.559996    9744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: (1.0115161s)
	I0601 10:43:09.559996    9744 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:43:09.907382    9744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404
	W0601 10:43:10.919897    9744 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:43:10.919897    9744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: (1.0115409s)
	W0601 10:43:10.919897    9744 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	
	W0601 10:43:10.919897    9744 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:43:10.930485    9744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:43:10.936648    9744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404
	W0601 10:43:11.992966    9744 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:43:11.993134    9744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: (1.0563063s)
	I0601 10:43:11.993291    9744 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:43:12.225635    9744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404
	W0601 10:43:13.279517    9744 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:43:13.279517    9744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: (1.0537302s)
	W0601 10:43:13.279879    9744 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	
	W0601 10:43:13.279879    9744 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:43:13.279879    9744 start.go:134] duration metric: createHost completed in 13.1883458s
	I0601 10:43:13.289103    9744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:43:13.295139    9744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404
	W0601 10:43:14.339647    9744 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:43:14.339647    9744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: (1.0443808s)
	I0601 10:43:14.339950    9744 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:43:14.594268    9744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404
	W0601 10:43:15.605154    9744 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:43:15.605203    9744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: (1.0107602s)
	W0601 10:43:15.605405    9744 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	
	W0601 10:43:15.605494    9744 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:43:15.615581    9744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 10:43:15.620964    9744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404
	W0601 10:43:16.632312    9744 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	I0601 10:43:16.632312    9744 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: (1.0113369s)
	I0601 10:43:16.632312    9744 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:43:16.850841    9744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404
	W0601 10:43:17.848575    9744 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404 returned with exit code 1
	W0601 10:43:17.848881    9744 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	
	W0601 10:43:17.848957    9744 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220601104200-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601104200-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	I0601 10:43:17.848985    9744 fix.go:57] fixHost completed within 44.9628822s
	I0601 10:43:17.848985    9744 start.go:81] releasing machines lock for "ingress-addon-legacy-20220601104200-9404", held for 44.9631398s
	W0601 10:43:17.849741    9744 out.go:239] * Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20220601104200-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220601104200-9404 container: docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220601104200-9404: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20220601104200-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220601104200-9404 container: docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220601104200-9404: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404: read-only file system
	
	I0601 10:43:17.854477    9744 out.go:177] 
	W0601 10:43:17.857388    9744 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220601104200-9404 container: docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220601104200-9404: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220601104200-9404 container: docker volume create ingress-addon-legacy-20220601104200-9404 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601104200-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220601104200-9404: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220601104200-9404: read-only file system
	
	W0601 10:43:17.857566    9744 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 10:43:17.857566    9744 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 10:43:17.861576    9744 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220601104200-9404 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker" : exit status 60
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (77.66s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (7s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220601104200-9404 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220601104200-9404 addons enable ingress --alsologtostderr -v=5: exit status 10 (3.0900592s)

-- stdout --
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I0601 10:43:18.248626    8624 out.go:296] Setting OutFile to fd 644 ...
	I0601 10:43:18.312090    8624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:43:18.312090    8624 out.go:309] Setting ErrFile to fd 872...
	I0601 10:43:18.312090    8624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:43:18.324251    8624 config.go:178] Loaded profile config "ingress-addon-legacy-20220601104200-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0601 10:43:18.324251    8624 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220601104200-9404"
	I0601 10:43:18.324251    8624 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220601104200-9404"
	I0601 10:43:18.325472    8624 host.go:66] Checking if "ingress-addon-legacy-20220601104200-9404" exists ...
	I0601 10:43:18.338660    8624 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}
	W0601 10:43:20.769613    8624 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}} returned with exit code 1
	I0601 10:43:20.769662    8624 cli_runner.go:217] Completed: docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: (2.4308143s)
	W0601 10:43:20.769662    8624 host.go:54] host status for "ingress-addon-legacy-20220601104200-9404" returned error: state: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404
	W0601 10:43:20.769662    8624 addons.go:202] "ingress-addon-legacy-20220601104200-9404" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0601 10:43:20.769662    8624 addons.go:386] Verifying addon ingress=true in "ingress-addon-legacy-20220601104200-9404"
	I0601 10:43:20.773103    8624 out.go:177] * Verifying ingress addon...
	W0601 10:43:20.775536    8624 loader.go:221] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 10:43:20.778225    8624 out.go:177] 
	W0601 10:43:20.780739    8624 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220601104200-9404" does not exist: client config: context "ingress-addon-legacy-20220601104200-9404" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220601104200-9404" does not exist: client config: context "ingress-addon-legacy-20220601104200-9404" does not exist]
	W0601 10:43:20.780739    8624 out.go:239] * 
	* 
	W0601 10:43:21.035426    8624 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_addons_765a40db962dd8139438d8c956b5e6e825316d2d_5.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_addons_765a40db962dd8139438d8c956b5e6e825316d2d_5.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 10:43:21.039549    8624 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220601104200-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect ingress-addon-legacy-20220601104200-9404: exit status 1 (1.1179841s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: ingress-addon-legacy-20220601104200-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220601104200-9404 -n ingress-addon-legacy-20220601104200-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220601104200-9404 -n ingress-addon-legacy-20220601104200-9404: exit status 7 (2.7850659s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:43:24.964332    3184 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220601104200-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (7.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (3.93s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:156: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220601104200-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect ingress-addon-legacy-20220601104200-9404: exit status 1 (1.0998553s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: ingress-addon-legacy-20220601104200-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220601104200-9404 -n ingress-addon-legacy-20220601104200-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220601104200-9404 -n ingress-addon-legacy-20220601104200-9404: exit status 7 (2.8126784s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:43:31.704829    6780 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20220601104200-9404": docker container inspect ingress-addon-legacy-20220601104200-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220601104200-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220601104200-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (3.93s)

TestJSONOutput/start/Command (74.06s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20220601104339-9404 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-20220601104339-9404 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: exit status 60 (1m14.0619302s)

-- stdout --
	{"specversion":"1.0","id":"ceb0767f-2379-4b59-88be-7102f3d22705","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-20220601104339-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"72a0353d-de36-46f1-bce7-dc313a5d2ab3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"97c53a33-d69d-4a52-bdaf-ac2a84129d92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"b96c3f10-4b37-4d75-b59b-8e6877dd9858","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"a2f576e6-d40c-4f46-9840-395137fcfe5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dcd23eee-e08b-41a0-978f-e57f8c76511f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a3bff2f7-27e1-40f9-9ea1-0ea6db725e05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"2b040d57-e9cb-4054-82d9-45587fd36759","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node json-output-20220601104339-9404 in cluster json-output-20220601104339-9404","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"03be158c-76f6-4aba-bf4e-8b21208b8b88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5be5c288-3824-4fc1-b08c-cc3ef31e26cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"63629566-345f-483a-815b-9ce32930d6a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220601104339-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220601104339-9404: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network a00603e7facc0710bb8448dbcef0f368b8c3c82f95b28e22741573c135325aed (br-a00603e7facc): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4"}}
	{"specversion":"1.0","id":"52068fbc-2cb3-475b-bc1e-506defee413a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for json-output-20220601104339-9404 container: docker volume create json-output-20220601104339-9404 --label name.minikube.sigs.k8s.io=json-output-20220601104339-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220601104339-9404: error while creating volume root path '/var/lib/docker/volumes/json-output-20220601104339-9404': mkdir /var/lib/docker/volumes/json-output-20220601104339-9404: read-only file system"}}
	{"specversion":"1.0","id":"0c8f10d6-60fe-4dea-9db5-1639090ff8bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"docker \"json-output-20220601104339-9404\" container is missing, will recreate.","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"318057eb-bba0-48c0-9adb-ff4fd46d24ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f7d38239-c0e9-4771-bc94-7cca9d559a02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220601104339-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220601104339-9404: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 22fba39651330eb3f7a0356952c96590de9eae8a54637af6895fe395e759f0f6 (br-22fba3965133): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4"}}
	{"specversion":"1.0","id":"7023a91d-4ed3-44d3-8c6a-c39772021a47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start docker container. Running \"minikube delete -p json-output-20220601104339-9404\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220601104339-9404 container: docker volume create json-output-20220601104339-9404 --label name.minikube.sigs.k8s.io=json-output-20220601104339-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220601104339-9404: error while creating volume root path '/var/lib/docker/volumes/json-output-20220601104339-9404': mkdir /var/lib/docker/volumes/json-output-20220601104339-9404: read-only file system"}}
	{"specversion":"1.0","id":"2c4ec9f0-a06a-4a1b-ba95-c1fb9976479b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Restart Docker","exitcode":"60","issues":"https://github.com/kubernetes/minikube/issues/6825","message":"Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220601104339-9404 container: docker volume create json-output-20220601104339-9404 --label name.minikube.sigs.k8s.io=json-output-20220601104339-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220601104339-9404: error while creating volume root path '/var/lib/docker/volumes/json-output-20220601104339-9404': mkdir /var/lib/docker/volumes/json-output-20220601104339-9404: read-only file system","name":"PR_DOCKER_READONLY_VOL","url":""}}

-- /stdout --
** stderr ** 
	E0601 10:43:53.893385    8736 network_create.go:104] error while trying to create docker network json-output-20220601104339-9404 192.168.49.0/24: create docker network json-output-20220601104339-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220601104339-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a00603e7facc0710bb8448dbcef0f368b8c3c82f95b28e22741573c135325aed (br-a00603e7facc): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	E0601 10:44:40.367212    8736 network_create.go:104] error while trying to create docker network json-output-20220601104339-9404 192.168.58.0/24: create docker network json-output-20220601104339-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220601104339-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 22fba39651330eb3f7a0356952c96590de9eae8a54637af6895fe395e759f0f6 (br-22fba3965133): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe start -p json-output-20220601104339-9404 --output=json --user=testUser --memory=2200 --wait=true --driver=docker": exit status 60
--- FAIL: TestJSONOutput/start/Command (74.06s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps

json_output_test.go:114: step 8 has already been assigned to another step:
Creating docker container (CPUs=2, Memory=2200MB) ...
Cannot use for:
docker "json-output-20220601104339-9404" container is missing, will recreate.
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ceb0767f-2379-4b59-88be-7102f3d22705
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20220601104339-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 72a0353d-de36-46f1-bce7-dc313a5d2ab3
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 97c53a33-d69d-4a52-bdaf-ac2a84129d92
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b96c3f10-4b37-4d75-b59b-8e6877dd9858
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=14079"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a2f576e6-d40c-4f46-9840-395137fcfe5c
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: dcd23eee-e08b-41a0-978f-e57f8c76511f
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a3bff2f7-27e1-40f9-9ea1-0ea6db725e05
datacontenttype: application/json
Data,
{
"message": "Using Docker Desktop driver with the root privilege"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2b040d57-e9cb-4054-82d9-45587fd36759
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20220601104339-9404 in cluster json-output-20220601104339-9404",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 03be158c-76f6-4aba-bf4e-8b21208b8b88
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 5be5c288-3824-4fc1-b08c-cc3ef31e26cc
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: 63629566-345f-483a-815b-9ce32930d6a3
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220601104339-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220601104339-9404: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network a00603e7facc0710bb8448dbcef0f368b8c3c82f95b28e22741573c135325aed (br-a00603e7facc): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 52068fbc-2cb3-475b-bc1e-506defee413a
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for json-output-20220601104339-9404 container: docker volume create json-output-20220601104339-9404 --label name.minikube.sigs.k8s.io=json-output-20220601104339-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220601104339-9404: error while creating volume root path '/var/lib/docker/volumes/json-output-20220601104339-9404': mkdir /var/lib/docker/volumes/json-output-20220601104339-9404: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0c8f10d6-60fe-4dea-9db5-1639090ff8bd
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20220601104339-9404\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 318057eb-bba0-48c0-9adb-ff4fd46d24ab
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: f7d38239-c0e9-4771-bc94-7cca9d559a02
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220601104339-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220601104339-9404: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 22fba39651330eb3f7a0356952c96590de9eae8a54637af6895fe395e759f0f6 (br-22fba3965133): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 7023a91d-4ed3-44d3-8c6a-c39772021a47
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20220601104339-9404\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220601104339-9404 container: docker volume create json-output-20220601104339-9404 --label name.minikube.sigs.k8s.io=json-output-20220601104339-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220601104339-9404: error while creating volume root path '/var/lib/docker/volumes/json-output-20220601104339-9404': mkdir /var/lib/docker/volumes/json-output-20220601104339-9404: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 2c4ec9f0-a06a-4a1b-ba95-c1fb9976479b
datacontenttype: application/json
Data,
{
"advice": "Restart Docker",
"exitcode": "60",
"issues": "https://github.com/kubernetes/minikube/issues/6825",
"message": "Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220601104339-9404 container: docker volume create json-output-20220601104339-9404 --label name.minikube.sigs.k8s.io=json-output-20220601104339-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220601104339-9404: error while creating volume root path '/var/lib/docker/volumes/json-output-20220601104339-9404': mkdir /var/lib/docker/volumes/json-output-20220601104339-9404: read-only file system",
"name": "PR_DOCKER_READONLY_VOL",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.01s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ceb0767f-2379-4b59-88be-7102f3d22705
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20220601104339-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 72a0353d-de36-46f1-bce7-dc313a5d2ab3
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 97c53a33-d69d-4a52-bdaf-ac2a84129d92
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b96c3f10-4b37-4d75-b59b-8e6877dd9858
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=14079"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a2f576e6-d40c-4f46-9840-395137fcfe5c
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: dcd23eee-e08b-41a0-978f-e57f8c76511f
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a3bff2f7-27e1-40f9-9ea1-0ea6db725e05
datacontenttype: application/json
Data,
{
"message": "Using Docker Desktop driver with the root privilege"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2b040d57-e9cb-4054-82d9-45587fd36759
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20220601104339-9404 in cluster json-output-20220601104339-9404",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 03be158c-76f6-4aba-bf4e-8b21208b8b88
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 5be5c288-3824-4fc1-b08c-cc3ef31e26cc
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: 63629566-345f-483a-815b-9ce32930d6a3
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220601104339-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220601104339-9404: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network a00603e7facc0710bb8448dbcef0f368b8c3c82f95b28e22741573c135325aed (br-a00603e7facc): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 52068fbc-2cb3-475b-bc1e-506defee413a
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for json-output-20220601104339-9404 container: docker volume create json-output-20220601104339-9404 --label name.minikube.sigs.k8s.io=json-output-20220601104339-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220601104339-9404: error while creating volume root path '/var/lib/docker/volumes/json-output-20220601104339-9404': mkdir /var/lib/docker/volumes/json-output-20220601104339-9404: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0c8f10d6-60fe-4dea-9db5-1639090ff8bd
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20220601104339-9404\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 318057eb-bba0-48c0-9adb-ff4fd46d24ab
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: f7d38239-c0e9-4771-bc94-7cca9d559a02
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220601104339-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true json-output-20220601104339-9404: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 22fba39651330eb3f7a0356952c96590de9eae8a54637af6895fe395e759f0f6 (br-22fba3965133): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 7023a91d-4ed3-44d3-8c6a-c39772021a47
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20220601104339-9404\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220601104339-9404 container: docker volume create json-output-20220601104339-9404 --label name.minikube.sigs.k8s.io=json-output-20220601104339-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220601104339-9404: error while creating volume root path '/var/lib/docker/volumes/json-output-20220601104339-9404': mkdir /var/lib/docker/volumes/json-output-20220601104339-9404: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 2c4ec9f0-a06a-4a1b-ba95-c1fb9976479b
datacontenttype: application/json
Data,
{
"advice": "Restart Docker",
"exitcode": "60",
"issues": "https://github.com/kubernetes/minikube/issues/6825",
"message": "Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220601104339-9404 container: docker volume create json-output-20220601104339-9404 --label name.minikube.sigs.k8s.io=json-output-20220601104339-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220601104339-9404: error while creating volume root path '/var/lib/docker/volumes/json-output-20220601104339-9404': mkdir /var/lib/docker/volumes/json-output-20220601104339-9404: read-only file system",
"name": "PR_DOCKER_READONLY_VOL",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.01s)
TestJSONOutput/pause/Command (3.1s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20220601104339-9404 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p json-output-20220601104339-9404 --output=json --user=testUser: exit status 80 (3.100646s)
-- stdout --
	{"specversion":"1.0","id":"81037b67-ed91-4429-b22c-456974625641","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"state: unknown state \"json-output-20220601104339-9404\": docker container inspect json-output-20220601104339-9404 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220601104339-9404","name":"GUEST_STATUS","url":""}}
	{"specversion":"1.0","id":"f611504f-3972-4795-a5c1-fb9f7defc224","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                      │\n│    If the above advice does not help, please let us know:                                                            │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                          │\n│                                                                                                                      │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │\n│    Please also attach the following file to the GitHub issue:                                                        │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_11.log    │\n│                                                                                                                      │\n╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe pause -p json-output-20220601104339-9404 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (3.10s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/unpause/Command (3.07s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20220601104339-9404 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe unpause -p json-output-20220601104339-9404 --output=json --user=testUser: exit status 80 (3.067542s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "json-output-20220601104339-9404": docker container inspect json-output-20220601104339-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: json-output-20220601104339-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_unpause_00b12d9cedab4ae1bb930a621bdee2ada68dbd98_9.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe unpause -p json-output-20220601104339-9404 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (3.07s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/stop/Command (22.06s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20220601104339-9404 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p json-output-20220601104339-9404 --output=json --user=testUser: exit status 82 (22.0616575s)
-- stdout --
	{"specversion":"1.0","id":"fd7e8f7b-4ec0-469f-be71-42e73809e115","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220601104339-9404\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"ab96268e-32cc-46de-b2ad-441c5c59033b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220601104339-9404\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"ac475c25-f7ce-4e43-a4ca-20483be013dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220601104339-9404\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"64d1f600-d3c9-495d-8889-ac1130f68de0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220601104339-9404\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"3a37124f-ea35-494f-83fb-870a966a2c4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220601104339-9404\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"752058cb-b4d9-47a8-bd8f-71e9b3666282","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220601104339-9404\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"18f46ead-e20e-4b31-b431-53e54d7ccc17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"82","issues":"","message":"docker container inspect json-output-20220601104339-9404 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220601104339-9404","name":"GUEST_STOP_TIMEOUT","url":""}}
	{"specversion":"1.0","id":"9e4ea7a4-d76f-40d7-88bc-c5b40efca41b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                     │\n│    If the above advice does not help, please let us know:                                                           │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                         │\n│                                                                                                                     │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │\n│    Please also attach the following file to the GitHub issue:                                                       │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │\n│                                                                                                                     │\n╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
** stderr ** 
	E0601 10:45:05.210363    9200 daemonize_windows.go:38] error terminating scheduled stop for profile json-output-20220601104339-9404: stopping schedule-stop service for profile json-output-20220601104339-9404: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "json-output-20220601104339-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" json-output-20220601104339-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: json-output-20220601104339-9404
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe stop -p json-output-20220601104339-9404 --output=json --user=testUser": exit status 82
--- FAIL: TestJSONOutput/stop/Command (22.06s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
json_output_test.go:114: step 0 has already been assigned to another step:
Stopping node "json-output-20220601104339-9404"  ...
Cannot use for:
Stopping node "json-output-20220601104339-9404"  ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fd7e8f7b-4ec0-469f-be71-42e73809e115
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220601104339-9404\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ab96268e-32cc-46de-b2ad-441c5c59033b
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220601104339-9404\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ac475c25-f7ce-4e43-a4ca-20483be013dc
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220601104339-9404\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 64d1f600-d3c9-495d-8889-ac1130f68de0
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220601104339-9404\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3a37124f-ea35-494f-83fb-870a966a2c4b
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220601104339-9404\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 752058cb-b4d9-47a8-bd8f-71e9b3666282
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220601104339-9404\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 18f46ead-e20e-4b31-b431-53e54d7ccc17
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20220601104339-9404 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220601104339-9404",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 9e4ea7a4-d76f-40d7-88bc-c5b40efca41b
datacontenttype: application/json
Data,
{
"message": "╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                     │\n│    If the above advice does not help, please let us know:                                                           │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                         │\n│                                                                                                                     │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │\n│    Please also attach the following file to the GitHub issue:                                                       │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │\n│                                                                                                                     │\n╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.01s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fd7e8f7b-4ec0-469f-be71-42e73809e115
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220601104339-9404\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ab96268e-32cc-46de-b2ad-441c5c59033b
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220601104339-9404\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ac475c25-f7ce-4e43-a4ca-20483be013dc
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220601104339-9404\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 64d1f600-d3c9-495d-8889-ac1130f68de0
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220601104339-9404\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3a37124f-ea35-494f-83fb-870a966a2c4b
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220601104339-9404\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 752058cb-b4d9-47a8-bd8f-71e9b3666282
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220601104339-9404\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 18f46ead-e20e-4b31-b431-53e54d7ccc17
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20220601104339-9404 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220601104339-9404",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 9e4ea7a4-d76f-40d7-88bc-c5b40efca41b
datacontenttype: application/json
Data,
{
"message": "╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                     │\n│    If the above advice does not help, please let us know:                                                           │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                         │\n│                                                                                                                     │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │\n│    Please also attach the following file to the GitHub issue:                                                       │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │\n│                                                                                                                     │\n╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.01s)
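The assertion behind this subtest is that `currentstep` in minikube's CloudEvents JSON output increases monotonically across `io.k8s.sigs.minikube.step` events; both step events above report `"currentstep": "0"`, which is what trips it. A minimal sketch of that check (Python, with abbreviated hypothetical payloads modeled on the `Data` blocks above):

```python
import json

# Two Data payloads as emitted above: both report currentstep "0".
events = [
    '{"currentstep": "0", "message": "Stopping node ...", "name": "Stopping", "totalsteps": "2"}',
    '{"currentstep": "0", "message": "Stopping node ...", "name": "Stopping", "totalsteps": "2"}',
]

def steps_increase(raw_events):
    """Return True iff currentstep strictly increases across step events."""
    steps = [int(json.loads(e)["currentstep"]) for e in raw_events]
    return all(a < b for a, b in zip(steps, steps[1:]))

print(steps_increase(events))  # False: step 0 is repeated, so the test fails
```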

TestKicCustomNetwork/create_custom_network (241.25s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220601104537-9404 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220601104537-9404 --network=: (3m19.8917514s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0198691s)
kic_custom_network_test.go:127: docker-network-20220601104537-9404 network is not listed by [[docker network ls --format {{.Name}}]]: 
-- stdout --
	bridge
	host
	none

-- /stdout --
helpers_test.go:175: Cleaning up "docker-network-20220601104537-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220601104537-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220601104537-9404: (40.3256585s)
--- FAIL: TestKicCustomNetwork/create_custom_network (241.25s)

TestKicExistingNetwork (4.12s)

=== RUN   TestKicExistingNetwork
E0601 10:53:31.759679    9404 network_create.go:104] error while trying to create docker network existing-network 192.168.49.0/24: create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true existing-network: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 3e42a54de02253c8538cbb4e2dd4c3339709494582df51dcee6abad6cc34d2f0 (br-3e42a54de022): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
kic_custom_network_test.go:78: error creating network: un-retryable: create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true existing-network: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 3e42a54de02253c8538cbb4e2dd4c3339709494582df51dcee6abad6cc34d2f0 (br-3e42a54de022): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
--- FAIL: TestKicExistingNetwork (4.12s)
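The daemon refuses the `docker network create` because the requested subnet overlaps one already configured on another bridge. The overlap test itself can be reproduced with the stdlib `ipaddress` module; the conflicting bridge's subnet is not printed in the log, so the one below is a hypothetical example that covers 192.168.49.0/24:

```python
import ipaddress

requested = ipaddress.ip_network("192.168.49.0/24")  # subnet minikube asked for
# Hypothetical subnet for the pre-existing bridge (br-0c9673f75245); the log
# only reports that it overlaps the requested range, not its actual CIDR.
existing = ipaddress.ip_network("192.168.48.0/22")

# Docker rejects the create when any existing subnet overlaps the new one.
print(requested.overlaps(existing))  # True -> "networks have overlapping IPv4"
```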

TestKicCustomSubnet (236.19s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-20220601105331-9404 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-20220601105331-9404 --subnet=192.168.60.0/24: (3m16.9970915s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220601105331-9404 --format "{{(index .IPAM.Config 0).Subnet}}"
kic_custom_network_test.go:133: (dbg) Non-zero exit: docker network inspect custom-subnet-20220601105331-9404 --format "{{(index .IPAM.Config 0).Subnet}}": exit status 1 (1.0338354s)

-- stdout --
	

-- /stdout --
** stderr ** 
	Error: No such network: custom-subnet-20220601105331-9404

** /stderr **
kic_custom_network_test.go:135: docker network inspect custom-subnet-20220601105331-9404 --format "{{(index .IPAM.Config 0).Subnet}}" failed: exit status 1

-- stdout --
	

-- /stdout --
** stderr ** 
	Error: No such network: custom-subnet-20220601105331-9404

** /stderr **
helpers_test.go:175: Cleaning up "custom-subnet-20220601105331-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-20220601105331-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-20220601105331-9404: (38.1520302s)
--- FAIL: TestKicCustomSubnet (236.19s)
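The failing check reads the subnet back via the Go template `{{(index .IPAM.Config 0).Subnet}}` over `docker network inspect` output, which here fails only because the network no longer exists. For reference, the equivalent extraction over the inspect JSON (abbreviated, hypothetical sample below) looks like:

```python
import json

# Abbreviated, hypothetical `docker network inspect <name>` output; the real
# command returns a JSON array with one object per matched network.
inspect_output = json.dumps([{
    "Name": "custom-subnet-20220601105331-9404",
    "IPAM": {"Config": [{"Subnet": "192.168.60.0/24", "Gateway": "192.168.60.1"}]},
}])

def first_subnet(raw):
    """Equivalent of the Go template {{(index .IPAM.Config 0).Subnet}}."""
    nets = json.loads(raw)
    return nets[0]["IPAM"]["Config"][0]["Subnet"]

print(first_subnet(inspect_output))  # 192.168.60.0/24
```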

TestMinikubeProfile (94.94s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-20220601105728-9404 --driver=docker
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p first-20220601105728-9404 --driver=docker: exit status 60 (1m14.440587s)

-- stdout --
	* [first-20220601105728-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node first-20220601105728-9404 in cluster first-20220601105728-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "first-20220601105728-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	E0601 10:57:42.778747    7968 network_create.go:104] error while trying to create docker network first-20220601105728-9404 192.168.49.0/24: create docker network first-20220601105728-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true first-20220601105728-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e77307239b47f960f502acc1da2c4a12f176358b4329f9d3e09772c720bb7a34 (br-e77307239b47): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network first-20220601105728-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true first-20220601105728-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e77307239b47f960f502acc1da2c4a12f176358b4329f9d3e09772c720bb7a34 (br-e77307239b47): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for first-20220601105728-9404 container: docker volume create first-20220601105728-9404 --label name.minikube.sigs.k8s.io=first-20220601105728-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create first-20220601105728-9404: error while creating volume root path '/var/lib/docker/volumes/first-20220601105728-9404': mkdir /var/lib/docker/volumes/first-20220601105728-9404: read-only file system
	
	E0601 10:58:29.213771    7968 network_create.go:104] error while trying to create docker network first-20220601105728-9404 192.168.58.0/24: create docker network first-20220601105728-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true first-20220601105728-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9eb54aa25e9695c59efca8522554eca1175fd6d5d7750609a4332c5aea5c04ab (br-9eb54aa25e96): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network first-20220601105728-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true first-20220601105728-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9eb54aa25e9695c59efca8522554eca1175fd6d5d7750609a4332c5aea5c04ab (br-9eb54aa25e96): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p first-20220601105728-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for first-20220601105728-9404 container: docker volume create first-20220601105728-9404 --label name.minikube.sigs.k8s.io=first-20220601105728-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create first-20220601105728-9404: error while creating volume root path '/var/lib/docker/volumes/first-20220601105728-9404': mkdir /var/lib/docker/volumes/first-20220601105728-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for first-20220601105728-9404 container: docker volume create first-20220601105728-9404 --label name.minikube.sigs.k8s.io=first-20220601105728-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create first-20220601105728-9404: error while creating volume root path '/var/lib/docker/volumes/first-20220601105728-9404': mkdir /var/lib/docker/volumes/first-20220601105728-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-windows-amd64.exe start -p first-20220601105728-9404 --driver=docker": exit status 60
panic.go:482: *** TestMinikubeProfile FAILED at 2022-06-01 10:58:42.7168656 +0000 GMT m=+2132.992476401
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect second-20220601105728-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect second-20220601105728-9404: exit status 1 (1.1020648s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: second-20220601105728-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p second-20220601105728-9404 -n second-20220601105728-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p second-20220601105728-9404 -n second-20220601105728-9404: exit status 85 (328.4237ms)

-- stdout --
	* Profile "second-20220601105728-9404" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-20220601105728-9404"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-20220601105728-9404" host is not running, skipping log retrieval (state="* Profile \"second-20220601105728-9404\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-20220601105728-9404\"")
helpers_test.go:175: Cleaning up "second-20220601105728-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-20220601105728-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-20220601105728-9404: (7.0166403s)
panic.go:482: *** TestMinikubeProfile FAILED at 2022-06-01 10:58:51.1725514 +0000 GMT m=+2141.448067501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect first-20220601105728-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect first-20220601105728-9404: exit status 1 (1.0870402s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: first-20220601105728-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p first-20220601105728-9404 -n first-20220601105728-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p first-20220601105728-9404 -n first-20220601105728-9404: exit status 7 (2.7986127s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 10:58:55.036050    3800 status.go:247] status error: host: state: unknown state "first-20220601105728-9404": docker container inspect first-20220601105728-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: first-20220601105728-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-20220601105728-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "first-20220601105728-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-20220601105728-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-20220601105728-9404: (8.1480351s)
--- FAIL: TestMinikubeProfile (94.94s)

TestMountStart/serial/StartWithMountFirst (78.58s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-20220601105903-9404 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-1-20220601105903-9404 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: exit status 60 (1m14.6865581s)

-- stdout --
	* [mount-start-1-20220601105903-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting minikube without Kubernetes mount-start-1-20220601105903-9404 in cluster mount-start-1-20220601105903-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-1-20220601105903-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0601 10:59:17.726642    7956 network_create.go:104] error while trying to create docker network mount-start-1-20220601105903-9404 192.168.49.0/24: create docker network mount-start-1-20220601105903-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true mount-start-1-20220601105903-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9a26bf7f88960437df9f6964c3c4ae0fcadcd18dddb045146bef959b77e53654 (br-9a26bf7f8896): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network mount-start-1-20220601105903-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true mount-start-1-20220601105903-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9a26bf7f88960437df9f6964c3c4ae0fcadcd18dddb045146bef959b77e53654 (br-9a26bf7f8896): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for mount-start-1-20220601105903-9404 container: docker volume create mount-start-1-20220601105903-9404 --label name.minikube.sigs.k8s.io=mount-start-1-20220601105903-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create mount-start-1-20220601105903-9404: error while creating volume root path '/var/lib/docker/volumes/mount-start-1-20220601105903-9404': mkdir /var/lib/docker/volumes/mount-start-1-20220601105903-9404: read-only file system
	
	E0601 11:00:04.370665    7956 network_create.go:104] error while trying to create docker network mount-start-1-20220601105903-9404 192.168.58.0/24: create docker network mount-start-1-20220601105903-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true mount-start-1-20220601105903-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f0739828de5a9ae0774edfcf3c361cf8475dcf14fb81da1d8fb6782eb0dd5c50 (br-f0739828de5a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network mount-start-1-20220601105903-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true mount-start-1-20220601105903-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f0739828de5a9ae0774edfcf3c361cf8475dcf14fb81da1d8fb6782eb0dd5c50 (br-f0739828de5a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p mount-start-1-20220601105903-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for mount-start-1-20220601105903-9404 container: docker volume create mount-start-1-20220601105903-9404 --label name.minikube.sigs.k8s.io=mount-start-1-20220601105903-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create mount-start-1-20220601105903-9404: error while creating volume root path '/var/lib/docker/volumes/mount-start-1-20220601105903-9404': mkdir /var/lib/docker/volumes/mount-start-1-20220601105903-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for mount-start-1-20220601105903-9404 container: docker volume create mount-start-1-20220601105903-9404 --label name.minikube.sigs.k8s.io=mount-start-1-20220601105903-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create mount-start-1-20220601105903-9404: error while creating volume root path '/var/lib/docker/volumes/mount-start-1-20220601105903-9404': mkdir /var/lib/docker/volumes/mount-start-1-20220601105903-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p mount-start-1-20220601105903-9404 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/StartWithMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-20220601105903-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect mount-start-1-20220601105903-9404: exit status 1 (1.0989007s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: mount-start-1-20220601105903-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-1-20220601105903-9404 -n mount-start-1-20220601105903-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-1-20220601105903-9404 -n mount-start-1-20220601105903-9404: exit status 7 (2.7890868s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:00:21.770073    6964 status.go:247] status error: host: state: unknown state "mount-start-1-20220601105903-9404": docker container inspect mount-start-1-20220601105903-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20220601105903-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-20220601105903-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/StartWithMountFirst (78.58s)

TestMultiNode/serial/FreshStart2Nodes (78.19s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
multinode_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: exit status 60 (1m14.196265s)

-- stdout --
	* [multinode-20220601110036-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node multinode-20220601110036-9404 in cluster multinode-20220601110036-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20220601110036-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:00:37.093292    5892 out.go:296] Setting OutFile to fd 704 ...
	I0601 11:00:37.146292    5892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:00:37.146292    5892 out.go:309] Setting ErrFile to fd 800...
	I0601 11:00:37.146292    5892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:00:37.159287    5892 out.go:303] Setting JSON to false
	I0601 11:00:37.162282    5892 start.go:115] hostinfo: {"hostname":"minikube2","uptime":13172,"bootTime":1654068065,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:00:37.162282    5892 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:00:37.169286    5892 out.go:177] * [multinode-20220601110036-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:00:37.173293    5892 notify.go:193] Checking for updates...
	I0601 11:00:37.176285    5892 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:00:37.178301    5892 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:00:37.181281    5892 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:00:37.184303    5892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:00:37.186289    5892 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:00:39.787812    5892 docker.go:137] docker version: linux-20.10.14
	I0601 11:00:39.795779    5892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:00:41.816934    5892 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.021132s)
	I0601 11:00:41.816934    5892 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:00:40.7892907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:00:41.821696    5892 out.go:177] * Using the docker driver based on user configuration
	I0601 11:00:41.824820    5892 start.go:284] selected driver: docker
	I0601 11:00:41.824820    5892 start.go:806] validating driver "docker" against <nil>
	I0601 11:00:41.824820    5892 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:00:41.945601    5892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:00:43.929365    5892 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9837412s)
	I0601 11:00:43.929365    5892 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:00:42.9310041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:00:43.930115    5892 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:00:43.930337    5892 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:00:43.933649    5892 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:00:43.936310    5892 cni.go:95] Creating CNI manager for ""
	I0601 11:00:43.936310    5892 cni.go:156] 0 nodes found, recommending kindnet
	I0601 11:00:43.936437    5892 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:00:43.936437    5892 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:00:43.936622    5892 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 11:00:43.936622    5892 start_flags.go:306] config:
	{Name:multinode-20220601110036-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220601110036-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:00:43.940090    5892 out.go:177] * Starting control plane node multinode-20220601110036-9404 in cluster multinode-20220601110036-9404
	I0601 11:00:43.943119    5892 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:00:43.945726    5892 out.go:177] * Pulling base image ...
	I0601 11:00:43.948462    5892 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:00:43.948462    5892 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:00:43.948462    5892 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:00:43.948462    5892 cache.go:57] Caching tarball of preloaded images
	I0601 11:00:43.948462    5892 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:00:43.948462    5892 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:00:43.949449    5892 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220601110036-9404\config.json ...
	I0601 11:00:43.949449    5892 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220601110036-9404\config.json: {Name:mkcbd6a9a8572e5db045bd9c3c31374052db9334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:00:44.973039    5892 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:00:44.973039    5892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:00:44.973039    5892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:00:44.973585    5892 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:00:44.973771    5892 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:00:44.973771    5892 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:00:44.973846    5892 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:00:44.973846    5892 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:00:44.973846    5892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:00:47.250995    5892 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-446320271: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-446320271: read-only file system"}
	I0601 11:00:47.251061    5892 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:00:47.251131    5892 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:00:47.251217    5892 start.go:352] acquiring machines lock for multinode-20220601110036-9404: {Name:mk61810b7619e82ed9a43b6c44c060dca72b11e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:00:47.251473    5892 start.go:356] acquired machines lock for "multinode-20220601110036-9404" in 255.3µs
	I0601 11:00:47.251473    5892 start.go:91] Provisioning new machine with config: &{Name:multinode-20220601110036-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220601110036-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:00:47.251473    5892 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:00:47.255839    5892 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:00:47.256641    5892 start.go:165] libmachine.API.Create for "multinode-20220601110036-9404" (driver="docker")
	I0601 11:00:47.256641    5892 client.go:168] LocalClient.Create starting
	I0601 11:00:47.257339    5892 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:00:47.257911    5892 main.go:134] libmachine: Decoding PEM data...
	I0601 11:00:47.257997    5892 main.go:134] libmachine: Parsing certificate...
	I0601 11:00:47.257997    5892 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:00:47.257997    5892 main.go:134] libmachine: Decoding PEM data...
	I0601 11:00:47.257997    5892 main.go:134] libmachine: Parsing certificate...
	I0601 11:00:47.269376    5892 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:00:48.286351    5892 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:00:48.286351    5892 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0167918s)
	I0601 11:00:48.293897    5892 network_create.go:272] running [docker network inspect multinode-20220601110036-9404] to gather additional debugging logs...
	I0601 11:00:48.293897    5892 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404
	W0601 11:00:49.314456    5892 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 returned with exit code 1
	I0601 11:00:49.314456    5892 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404: (1.0204492s)
	I0601 11:00:49.314456    5892 network_create.go:275] error running [docker network inspect multinode-20220601110036-9404]: docker network inspect multinode-20220601110036-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220601110036-9404
	I0601 11:00:49.314456    5892 network_create.go:277] output of [docker network inspect multinode-20220601110036-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220601110036-9404
	
	** /stderr **
	I0601 11:00:49.322224    5892 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:00:50.347052    5892 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0248167s)
	I0601 11:00:50.368108    5892 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001182b8] misses:0}
	I0601 11:00:50.368171    5892 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:00:50.368171    5892 network_create.go:115] attempt to create docker network multinode-20220601110036-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:00:50.375716    5892 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404
	W0601 11:00:51.374992    5892 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404 returned with exit code 1
	E0601 11:00:51.374992    5892 network_create.go:104] error while trying to create docker network multinode-20220601110036-9404 192.168.49.0/24: create docker network multinode-20220601110036-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 800826d1f29cd68565e2fe7a15121756dd496c880764a89c3aaed96920ee84f6 (br-800826d1f29c): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:00:51.374992    5892 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 800826d1f29cd68565e2fe7a15121756dd496c880764a89c3aaed96920ee84f6 (br-800826d1f29c): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 800826d1f29cd68565e2fe7a15121756dd496c880764a89c3aaed96920ee84f6 (br-800826d1f29c): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:00:51.392497    5892 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:00:52.452890    5892 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0602105s)
	I0601 11:00:52.459647    5892 cli_runner.go:164] Run: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:00:53.470367    5892 cli_runner.go:211] docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:00:53.470367    5892 cli_runner.go:217] Completed: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0107084s)
	I0601 11:00:53.470367    5892 client.go:171] LocalClient.Create took 6.2136561s
	I0601 11:00:55.491854    5892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:00:55.497502    5892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:00:56.527833    5892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:00:56.527833    5892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.030258s)
	I0601 11:00:56.527833    5892 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:00:56.819984    5892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:00:57.831592    5892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:00:57.831684    5892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.01116s)
	W0601 11:00:57.831684    5892 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:00:57.831684    5892 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:00:57.844793    5892 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:00:57.852503    5892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:00:58.897154    5892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:00:58.897154    5892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0446394s)
	I0601 11:00:58.897154    5892 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:00:59.204577    5892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:01:00.255335    5892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:00.255422    5892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.050559s)
	W0601 11:01:00.255654    5892 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:01:00.255654    5892 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:00.255798    5892 start.go:134] duration metric: createHost completed in 13.0040341s
	I0601 11:01:00.255798    5892 start.go:81] releasing machines lock for "multinode-20220601110036-9404", held for 13.0041785s
	W0601 11:01:00.255972    5892 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	I0601 11:01:00.269638    5892 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:01:01.270594    5892 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:01:01.270594    5892 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0007583s)
	I0601 11:01:01.270831    5892 delete.go:82] Unable to get host status for multinode-20220601110036-9404, assuming it has already been deleted: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	W0601 11:01:01.270973    5892 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	I0601 11:01:01.270973    5892 start.go:614] Will try again in 5 seconds ...
	I0601 11:01:06.274965    5892 start.go:352] acquiring machines lock for multinode-20220601110036-9404: {Name:mk61810b7619e82ed9a43b6c44c060dca72b11e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:01:06.275380    5892 start.go:356] acquired machines lock for "multinode-20220601110036-9404" in 177.8µs
	I0601 11:01:06.275533    5892 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:01:06.275623    5892 fix.go:55] fixHost starting: 
	I0601 11:01:06.289710    5892 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:01:07.340504    5892 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:01:07.340681    5892 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0506669s)
	I0601 11:01:07.340681    5892 fix.go:103] recreateIfNeeded on multinode-20220601110036-9404: state= err=unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:07.340681    5892 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:01:07.344552    5892 out.go:177] * docker "multinode-20220601110036-9404" container is missing, will recreate.
	I0601 11:01:07.347831    5892 delete.go:124] DEMOLISHING multinode-20220601110036-9404 ...
	I0601 11:01:07.359573    5892 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:01:08.363234    5892 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:01:08.363234    5892 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0036498s)
	W0601 11:01:08.363234    5892 stop.go:75] unable to get state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:08.363234    5892 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:08.376107    5892 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:01:09.389145    5892 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:01:09.389286    5892 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0122599s)
	I0601 11:01:09.389286    5892 delete.go:82] Unable to get host status for multinode-20220601110036-9404, assuming it has already been deleted: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:09.395096    5892 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220601110036-9404
	W0601 11:01:10.414965    5892 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:10.415116    5892 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220601110036-9404: (1.0198332s)
	I0601 11:01:10.415188    5892 kic.go:356] could not find the container multinode-20220601110036-9404 to remove it. will try anyways
	I0601 11:01:10.422264    5892 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:01:11.441090    5892 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:01:11.441090    5892 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0182762s)
	W0601 11:01:11.441090    5892 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:11.449444    5892 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0"
	W0601 11:01:12.451359    5892 cli_runner.go:211] docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:01:12.451359    5892 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0": (1.0019036s)
	I0601 11:01:12.451359    5892 oci.go:625] error shutdown multinode-20220601110036-9404: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:13.471134    5892 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:01:14.469877    5892 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:01:14.470200    5892 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:14.470200    5892 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:01:14.470200    5892 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:14.953592    5892 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:01:15.975661    5892 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:01:15.975735    5892 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0218616s)
	I0601 11:01:15.975820    5892 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:15.975937    5892 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:01:15.975969    5892 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:16.881672    5892 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:01:17.871071    5892 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:01:17.871154    5892 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:17.871154    5892 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:01:17.871299    5892 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:18.528347    5892 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:01:19.566838    5892 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:01:19.566941    5892 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0384793s)
	I0601 11:01:19.566941    5892 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:19.566941    5892 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:01:19.567102    5892 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:20.696369    5892 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:01:21.720325    5892 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:01:21.720325    5892 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0237609s)
	I0601 11:01:21.720467    5892 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:21.720467    5892 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:01:21.720467    5892 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:23.255043    5892 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:01:24.262742    5892 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:01:24.262742    5892 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0076875s)
	I0601 11:01:24.262742    5892 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:24.262742    5892 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:01:24.262742    5892 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:27.316656    5892 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:01:28.354966    5892 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:01:28.355120    5892 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0381083s)
	I0601 11:01:28.355218    5892 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:28.355263    5892 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:01:28.355331    5892 oci.go:88] couldn't shut down multinode-20220601110036-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	 
	I0601 11:01:28.362267    5892 cli_runner.go:164] Run: docker rm -f -v multinode-20220601110036-9404
	I0601 11:01:29.382175    5892 cli_runner.go:217] Completed: docker rm -f -v multinode-20220601110036-9404: (1.0198643s)
	I0601 11:01:29.388568    5892 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220601110036-9404
	W0601 11:01:30.401839    5892 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:30.402047    5892 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220601110036-9404: (1.0132597s)
	I0601 11:01:30.410911    5892 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:01:31.432271    5892 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:01:31.432271    5892 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.021083s)
	I0601 11:01:31.438916    5892 network_create.go:272] running [docker network inspect multinode-20220601110036-9404] to gather additional debugging logs...
	I0601 11:01:31.438916    5892 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404
	W0601 11:01:32.484013    5892 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:32.484013    5892 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404: (1.0450851s)
	I0601 11:01:32.484013    5892 network_create.go:275] error running [docker network inspect multinode-20220601110036-9404]: docker network inspect multinode-20220601110036-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220601110036-9404
	I0601 11:01:32.484013    5892 network_create.go:277] output of [docker network inspect multinode-20220601110036-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220601110036-9404
	
	** /stderr **
	W0601 11:01:32.485441    5892 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:01:32.485441    5892 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:01:33.498527    5892 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:01:33.506411    5892 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:01:33.506411    5892 start.go:165] libmachine.API.Create for "multinode-20220601110036-9404" (driver="docker")
	I0601 11:01:33.506411    5892 client.go:168] LocalClient.Create starting
	I0601 11:01:33.507223    5892 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:01:33.507223    5892 main.go:134] libmachine: Decoding PEM data...
	I0601 11:01:33.507223    5892 main.go:134] libmachine: Parsing certificate...
	I0601 11:01:33.507223    5892 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:01:33.507223    5892 main.go:134] libmachine: Decoding PEM data...
	I0601 11:01:33.507223    5892 main.go:134] libmachine: Parsing certificate...
	I0601 11:01:33.515474    5892 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:01:34.533555    5892 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:01:34.533555    5892 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0180696s)
	I0601 11:01:34.540680    5892 network_create.go:272] running [docker network inspect multinode-20220601110036-9404] to gather additional debugging logs...
	I0601 11:01:34.540680    5892 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404
	W0601 11:01:35.550249    5892 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:35.550249    5892 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404: (1.009558s)
	I0601 11:01:35.550249    5892 network_create.go:275] error running [docker network inspect multinode-20220601110036-9404]: docker network inspect multinode-20220601110036-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220601110036-9404
	I0601 11:01:35.550249    5892 network_create.go:277] output of [docker network inspect multinode-20220601110036-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220601110036-9404
	
	** /stderr **
	I0601 11:01:35.557470    5892 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:01:36.576411    5892 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0188462s)
	I0601 11:01:36.592634    5892 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001182b8] amended:false}} dirty:map[] misses:0}
	I0601 11:01:36.592634    5892 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:01:36.608115    5892 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001182b8] amended:true}} dirty:map[192.168.49.0:0xc0001182b8 192.168.58.0:0xc0005c6fb8] misses:0}
	I0601 11:01:36.608115    5892 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:01:36.608115    5892 network_create.go:115] attempt to create docker network multinode-20220601110036-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:01:36.614879    5892 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404
	W0601 11:01:37.638439    5892 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:37.638439    5892 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: (1.0235483s)
	E0601 11:01:37.638439    5892 network_create.go:104] error while trying to create docker network multinode-20220601110036-9404 192.168.58.0/24: create docker network multinode-20220601110036-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9f90efee1e41bbff490db45e4dc0e3db2ed60c0f2ee343e151217e47a2e098b4 (br-9f90efee1e41): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:01:37.638439    5892 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9f90efee1e41bbff490db45e4dc0e3db2ed60c0f2ee343e151217e47a2e098b4 (br-9f90efee1e41): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9f90efee1e41bbff490db45e4dc0e3db2ed60c0f2ee343e151217e47a2e098b4 (br-9f90efee1e41): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:01:37.652482    5892 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:01:38.667202    5892 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0144783s)
	I0601 11:01:38.674646    5892 cli_runner.go:164] Run: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:01:39.712496    5892 cli_runner.go:211] docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:01:39.712496    5892 cli_runner.go:217] Completed: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0374916s)
	I0601 11:01:39.712573    5892 client.go:171] LocalClient.Create took 6.2060921s
	I0601 11:01:41.740335    5892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:01:41.747301    5892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:01:42.743421    5892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:42.743421    5892 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:43.094598    5892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:01:44.123055    5892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:44.123080    5892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0281158s)
	W0601 11:01:44.123080    5892 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:01:44.123080    5892 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:44.134433    5892 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:01:44.142032    5892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:01:45.184213    5892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:45.184315    5892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0416393s)
	I0601 11:01:45.184552    5892 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:45.427583    5892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:01:46.436622    5892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:46.436694    5892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0088287s)
	W0601 11:01:46.436855    5892 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:01:46.436894    5892 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:46.436894    5892 start.go:134] duration metric: createHost completed in 12.9380228s
	I0601 11:01:46.445453    5892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:01:46.451772    5892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:01:47.457748    5892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:47.457748    5892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0059646s)
	I0601 11:01:47.457748    5892 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:47.713076    5892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:01:48.732278    5892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:48.732278    5892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0191901s)
	W0601 11:01:48.732278    5892 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:01:48.732278    5892 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:48.741278    5892 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:01:48.746290    5892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:01:49.762910    5892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:49.762910    5892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0166085s)
	I0601 11:01:49.762910    5892 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:49.981937    5892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:01:50.999331    5892 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:01:50.999331    5892 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0173302s)
	W0601 11:01:50.999331    5892 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:01:50.999331    5892 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:01:50.999331    5892 fix.go:57] fixHost completed within 44.7232929s
	I0601 11:01:50.999331    5892 start.go:81] releasing machines lock for "multinode-20220601110036-9404", held for 44.7234456s
	W0601 11:01:51.000271    5892 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-20220601110036-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220601110036-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	I0601 11:01:51.006131    5892 out.go:177] 
	W0601 11:01:51.008509    5892 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	W0601 11:01:51.008509    5892 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:01:51.008509    5892 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:01:51.012007    5892 out.go:177] 

** /stderr **
multinode_test.go:85: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker" : exit status 60
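The `retry.go:31` lines in the log above ("will retry after 220.164297ms: …") come from a simple retry loop around the failing `docker container inspect` call. A minimal sketch of that pattern, in Python for illustration — `retry` and `get_ssh_port` are hypothetical names, and the real minikube implementation uses a randomized backoff rather than a fixed delay:

```python
import time

def retry(fn, attempts=3, delay=0.001):
    """Keep calling fn until it succeeds or attempts run out, sleeping
    between tries, and logging each failure before retrying."""
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except RuntimeError as err:
            last_err = err
            print(f"will retry after {delay * 1000:.3f}ms: {err}")
            time.sleep(delay)
    raise last_err

calls = 0
def get_ssh_port():
    # Stand-in for the failing `docker container inspect` call: the
    # container was never created, so every attempt fails the same way.
    global calls
    calls += 1
    raise RuntimeError("Error: No such container: multinode-20220601110036-9404")

try:
    retry(get_ssh_port)
except RuntimeError:
    print(f"gave up after {calls} attempts")
```

As in the log, retrying cannot help here: the root cause (the volume could not be created on a read-only filesystem, so the container never exists) makes every attempt fail identically.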
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.1144889s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.780004s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:01:55.009094    4176 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (78.19s)
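The Go template passed to `docker container inspect` throughout this test digs the published SSH host port out of the inspect JSON. A sketch of the equivalent lookup in Python, against a hypothetical inspect document for a healthy container (the field values here are illustrative); when the container does not exist, docker prints nothing to stdout, so there is no document to index and the command exits 1:

```python
import json

# Hypothetical fragment of `docker container inspect` output for a
# container whose port 22 is published on the host.
inspect_output = json.loads("""
[{"NetworkSettings":
   {"Ports": {"22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55000"}]}}}]
""")

def ssh_host_port(doc):
    # Equivalent of the Go template
    # {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
    return doc[0]["NetworkSettings"]["Ports"]["22/tcp"][0]["HostPort"]

print(ssh_host_port(inspect_output))  # 55000
```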

TestMultiNode/serial/DeployApp2Nodes (17.11s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (1.8817937s)

** stderr ** 
	error: cluster "multinode-20220601110036-9404" does not exist

** /stderr **
multinode_test.go:481: failed to create busybox deployment to multinode cluster
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- rollout status deployment/busybox: exit status 1 (1.8268836s)

** stderr ** 
	error: no server found for cluster "multinode-20220601110036-9404"

** /stderr **
multinode_test.go:486: failed to deploy busybox to multinode cluster
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:490: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (1.8674236s)

** stderr ** 
	error: no server found for cluster "multinode-20220601110036-9404"

** /stderr **
multinode_test.go:492: failed to retrieve Pod IPs
multinode_test.go:496: expected 2 Pod IPs but got 1
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:502: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (1.9105275s)

** stderr ** 
	error: no server found for cluster "multinode-20220601110036-9404"

** /stderr **
multinode_test.go:504: failed get Pod names
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- exec  -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- exec  -- nslookup kubernetes.io: exit status 1 (1.9398696s)

** stderr ** 
	error: no server found for cluster "multinode-20220601110036-9404"

** /stderr **
multinode_test.go:512: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- exec  -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- exec  -- nslookup kubernetes.default: exit status 1 (1.9082586s)

** stderr ** 
	error: no server found for cluster "multinode-20220601110036-9404"

** /stderr **
multinode_test.go:522: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (1.8518918s)

** stderr ** 
	error: no server found for cluster "multinode-20220601110036-9404"

** /stderr **
multinode_test.go:530: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
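The kubectl errors above (`cluster "…" does not exist`, then `no server found for cluster "…"`) are consistent with a kubeconfig whose entry for the profile never received a usable API endpoint, since the host was never started. A sketch of that lookup in Python — the kubeconfig structure is illustrative, not the exact on-disk file:

```python
# Illustrative kubeconfig state after a failed start: the cluster entry
# exists but has no server URL, so there is no API endpoint to contact.
kubeconfig = {
    "clusters": [
        {"name": "multinode-20220601110036-9404", "cluster": {}},  # no "server"
    ],
}

def server_for(cfg, name):
    for entry in cfg["clusters"]:
        if entry["name"] == name:
            return entry["cluster"].get("server")
    return None  # no such cluster entry at all

print(server_for(kubeconfig, "multinode-20220601110036-9404"))  # None
```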
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.1002488s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.8081351s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:02:12.114016    7084 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (17.11s)

TestMultiNode/serial/PingHostFrom2Pods (5.81s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:538: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220601110036-9404 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (1.890238s)

** stderr ** 
	error: no server found for cluster "multinode-20220601110036-9404"

** /stderr **
multinode_test.go:540: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.0901607s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.8165657s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:02:17.920254   10060 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (5.81s)

TestMultiNode/serial/AddNode (6.91s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220601110036-9404 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220601110036-9404 -v 3 --alsologtostderr: exit status 80 (3.0390094s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0601 11:02:18.181232    7048 out.go:296] Setting OutFile to fd 596 ...
	I0601 11:02:18.244581    7048 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:02:18.244581    7048 out.go:309] Setting ErrFile to fd 820...
	I0601 11:02:18.244581    7048 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:02:18.257492    7048 mustload.go:65] Loading cluster: multinode-20220601110036-9404
	I0601 11:02:18.258303    7048 config.go:178] Loaded profile config "multinode-20220601110036-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:02:18.272277    7048 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:02:20.691259    7048 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:02:20.691329    7048 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (2.4181532s)
	I0601 11:02:20.695624    7048 out.go:177] 
	W0601 11:02:20.698011    7048 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:02:20.698011    7048 out.go:239] * 
	* 
	W0601 11:02:20.951574    7048 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_24.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_24.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:02:20.955577    7048 out.go:177] 

** /stderr **
multinode_test.go:110: failed to add node to current cluster. args "out/minikube-windows-amd64.exe node add -p multinode-20220601110036-9404 -v 3 --alsologtostderr" : exit status 80
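`minikube node add` exits with `GUEST_STATUS` above because the container's `{{.State.Status}}` cannot be read at all, not because the state string was unrecognized. A sketch of how such a status check can distinguish a known state from an inspect failure — function and state names are illustrative, not minikube's actual API:

```python
KNOWN_STATES = {"running": "Running", "exited": "Stopped",
                "paused": "Paused", "created": "Created"}

def machine_state(exit_code, stdout):
    # `docker container inspect <name> --format {{.State.Status}}` exits 1
    # with "Error: No such container: <name>" when the container was never
    # created; that is the "unknown state" case seen in the log.
    if exit_code != 0:
        raise ValueError("unknown state: docker inspect failed")
    return KNOWN_STATES.get(stdout.strip(), "Unknown")

print(machine_state(0, "running\n"))  # Running
```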
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.0986127s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.7599314s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:02:24.827088    5756 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (6.91s)

TestMultiNode/serial/ProfileList (7.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.7943549s)
multinode_test.go:153: expected profile "multinode-20220601110036-9404" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-20220601110036-9404\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-20220601110036-9404\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.23.6\",\"ClusterName\":\"multinode-20220601110036-9404\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":[{\"Component\":\"kubelet\",\"Key\":\"cni-conf-dir\",\"Value\":\"/etc/cni/net.mk\"}],\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.23.6\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube2:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false},\"Active\":false}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.0851888s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.7819876s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:02:32.497467    1668 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (7.67s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --output json --alsologtostderr: exit status 7 (2.8036764s)

                                                
                                                
-- stdout --
	{"Name":"multinode-20220601110036-9404","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 11:02:32.770758    8900 out.go:296] Setting OutFile to fd 900 ...
	I0601 11:02:32.831023    8900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:02:32.831128    8900 out.go:309] Setting ErrFile to fd 664...
	I0601 11:02:32.831128    8900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:02:32.842744    8900 out.go:303] Setting JSON to true
	I0601 11:02:32.842744    8900 mustload.go:65] Loading cluster: multinode-20220601110036-9404
	I0601 11:02:32.843749    8900 config.go:178] Loaded profile config "multinode-20220601110036-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:02:32.843749    8900 status.go:253] checking status of multinode-20220601110036-9404 ...
	I0601 11:02:32.857751    8900 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:02:35.301285    8900 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:02:35.301285    8900 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (2.4435057s)
	I0601 11:02:35.301285    8900 status.go:328] multinode-20220601110036-9404 host status = "" (err=state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	)
	I0601 11:02:35.301285    8900 status.go:255] multinode-20220601110036-9404 status: &{Name:multinode-20220601110036-9404 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0601 11:02:35.301285    8900 status.go:258] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	E0601 11:02:35.301285    8900 status.go:261] The "multinode-20220601110036-9404" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:178: failed to decode json from status: args "out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.0831855s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.7551708s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:02:39.147564    7172 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (6.65s)

                                                
                                    
TestMultiNode/serial/StopNode (10.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 node stop m03
multinode_test.go:208: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 node stop m03: exit status 85 (598.4059ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_a721422985a44b3996d93fcfe1a29c6759a29372_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:210: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 node stop m03": exit status 85
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status: exit status 7 (2.8026511s)

                                                
                                                
-- stdout --
	multinode-20220601110036-9404
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:02:42.551462    7512 status.go:258] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	E0601 11:02:42.551462    7512 status.go:261] The "multinode-20220601110036-9404" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --alsologtostderr: exit status 7 (2.7645851s)

                                                
                                                
-- stdout --
	multinode-20220601110036-9404
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 11:02:42.808264    9884 out.go:296] Setting OutFile to fd 816 ...
	I0601 11:02:42.862777    9884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:02:42.862777    9884 out.go:309] Setting ErrFile to fd 660...
	I0601 11:02:42.862777    9884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:02:42.872097    9884 out.go:303] Setting JSON to false
	I0601 11:02:42.873086    9884 mustload.go:65] Loading cluster: multinode-20220601110036-9404
	I0601 11:02:42.873237    9884 config.go:178] Loaded profile config "multinode-20220601110036-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:02:42.873237    9884 status.go:253] checking status of multinode-20220601110036-9404 ...
	I0601 11:02:42.885408    9884 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:02:45.316395    9884 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:02:45.316468    9884 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (2.4308095s)
	I0601 11:02:45.316589    9884 status.go:328] multinode-20220601110036-9404 host status = "" (err=state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	)
	I0601 11:02:45.316617    9884 status.go:255] multinode-20220601110036-9404 status: &{Name:multinode-20220601110036-9404 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0601 11:02:45.316702    9884 status.go:258] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	E0601 11:02:45.316702    9884 status.go:261] The "multinode-20220601110036-9404" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:227: incorrect number of running kubelets: args "out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --alsologtostderr": multinode-20220601110036-9404
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:231: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --alsologtostderr": multinode-20220601110036-9404
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:235: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --alsologtostderr": multinode-20220601110036-9404
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.1035218s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.7311979s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:02:49.158841    9032 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (10.01s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:242: (dbg) Done: docker version -f {{.Server.Version}}: (1.1107979s)
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 node start m03 --alsologtostderr: exit status 85 (587.5366ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 11:02:50.534247    4620 out.go:296] Setting OutFile to fd 264 ...
	I0601 11:02:50.596103    4620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:02:50.596103    4620 out.go:309] Setting ErrFile to fd 696...
	I0601 11:02:50.596103    4620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:02:50.612731    4620 mustload.go:65] Loading cluster: multinode-20220601110036-9404
	I0601 11:02:50.613427    4620 config.go:178] Loaded profile config "multinode-20220601110036-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:02:50.617540    4620 out.go:177] 
	W0601 11:02:50.620255    4620 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	W0601 11:02:50.620255    4620 out.go:239] * 
	* 
	W0601 11:02:50.864280    4620 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_17.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_17.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:02:50.867405    4620 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:254: I0601 11:02:50.534247    4620 out.go:296] Setting OutFile to fd 264 ...
I0601 11:02:50.596103    4620 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0601 11:02:50.596103    4620 out.go:309] Setting ErrFile to fd 696...
I0601 11:02:50.596103    4620 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0601 11:02:50.612731    4620 mustload.go:65] Loading cluster: multinode-20220601110036-9404
I0601 11:02:50.613427    4620 config.go:178] Loaded profile config "multinode-20220601110036-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0601 11:02:50.617540    4620 out.go:177] 
W0601 11:02:50.620255    4620 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
W0601 11:02:50.620255    4620 out.go:239] * 
* 
W0601 11:02:50.864280    4620 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                       │
│    * If the above advice does not help, please let us know:                                                           │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
│                                                                                                                       │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
│    * Please also attach the following file to the GitHub issue:                                                       │
│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_17.log    │
│                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                       │
│    * If the above advice does not help, please let us know:                                                           │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
│                                                                                                                       │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
│    * Please also attach the following file to the GitHub issue:                                                       │
│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_17.log    │
│                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0601 11:02:50.867405    4620 out.go:177] 
multinode_test.go:255: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 node start m03 --alsologtostderr": exit status 85
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status
multinode_test.go:259: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status: exit status 7 (2.8960393s)

                                                
                                                
-- stdout --
	multinode-20220601110036-9404
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:02:53.764853    3280 status.go:258] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	E0601 11:02:53.764921    3280 status.go:261] The "multinode-20220601110036-9404" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:261: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.1027745s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.7508546s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:02:57.624671    9888 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (8.47s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (136.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220601110036-9404
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20220601110036-9404
multinode_test.go:288: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p multinode-20220601110036-9404: exit status 82 (22.1759962s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-20220601110036-9404"  ...
	* Stopping node "multinode-20220601110036-9404"  ...
	* Stopping node "multinode-20220601110036-9404"  ...
	* Stopping node "multinode-20220601110036-9404"  ...
	* Stopping node "multinode-20220601110036-9404"  ...
	* Stopping node "multinode-20220601110036-9404"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:03:03.228678    9516 daemonize_windows.go:38] error terminating scheduled stop for profile multinode-20220601110036-9404: stopping schedule-stop service for profile multinode-20220601110036-9404: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20220601110036-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:290: failed to run minikube stop. args "out/minikube-windows-amd64.exe node list -p multinode-20220601110036-9404" : exit status 82
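The stop failure above bottoms out in the format string minikube passes to `docker container inspect -f`: a Go template that walks the container's inspect JSON to find the host port mapped to guest port 22, and which fails with exit status 1 once the container no longer exists. As a sketch of what that template does (the struct types here are illustrative stand-ins for the inspect JSON shape, not minikube's actual types):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Minimal stand-in for the slice of the `docker container inspect` JSON
// that the format string walks: .NetworkSettings.Ports maps a guest port
// like "22/tcp" to a list of host bindings.
type binding struct{ HostPort string }
type settings struct{ Ports map[string][]binding }
type container struct{ NetworkSettings settings }

// hostPort applies the same template seen in the log's failing command.
func hostPort(c container) string {
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, c); err != nil {
		// With no "22/tcp" binding the index lookup errors out, which is
		// the analogue of the exit status 1 path when the container is gone.
		return ""
	}
	return out.String()
}

func main() {
	c := container{settings{Ports: map[string][]binding{
		"22/tcp": {{HostPort: "49154"}},
	}}}
	fmt.Println(hostPort(c)) // prints "49154"
}
```

The template only answers "which host port fronts guest port 22"; it cannot distinguish a missing binding from a missing container, which is why the log's SSH layer sees the failure only as `get port 22 ... exit status 1`.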
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404 --wait=true -v=8 --alsologtostderr: exit status 60 (1m50.0246181s)

-- stdout --
	* [multinode-20220601110036-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20220601110036-9404 in cluster multinode-20220601110036-9404
	* Pulling base image ...
	* docker "multinode-20220601110036-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20220601110036-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:03:20.385693    8092 out.go:296] Setting OutFile to fd 800 ...
	I0601 11:03:20.445276    8092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:03:20.445276    8092 out.go:309] Setting ErrFile to fd 968...
	I0601 11:03:20.445276    8092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:03:20.458167    8092 out.go:303] Setting JSON to false
	I0601 11:03:20.460312    8092 start.go:115] hostinfo: {"hostname":"minikube2","uptime":13335,"bootTime":1654068065,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:03:20.461317    8092 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:03:20.463859    8092 out.go:177] * [multinode-20220601110036-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:03:20.467857    8092 notify.go:193] Checking for updates...
	I0601 11:03:20.469868    8092 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:03:20.472873    8092 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:03:20.474872    8092 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:03:20.476866    8092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:03:20.479862    8092 config.go:178] Loaded profile config "multinode-20220601110036-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:03:20.479862    8092 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:03:23.066301    8092 docker.go:137] docker version: linux-20.10.14
	I0601 11:03:23.077752    8092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:03:25.110228    8092 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0324541s)
	I0601 11:03:25.111206    8092 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:03:24.0808201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:03:25.116330    8092 out.go:177] * Using the docker driver based on existing profile
	I0601 11:03:25.118167    8092 start.go:284] selected driver: docker
	I0601 11:03:25.118167    8092 start.go:806] validating driver "docker" against &{Name:multinode-20220601110036-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220601110036-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:03:25.118167    8092 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:03:25.138021    8092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:03:27.117735    8092 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9794577s)
	I0601 11:03:27.118314    8092 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:03:26.1314328 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:03:27.225775    8092 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:03:27.225775    8092 cni.go:95] Creating CNI manager for ""
	I0601 11:03:27.225775    8092 cni.go:156] 1 nodes found, recommending kindnet
	I0601 11:03:27.225775    8092 start_flags.go:306] config:
	{Name:multinode-20220601110036-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220601110036-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:03:27.230126    8092 out.go:177] * Starting control plane node multinode-20220601110036-9404 in cluster multinode-20220601110036-9404
	I0601 11:03:27.232521    8092 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:03:27.235474    8092 out.go:177] * Pulling base image ...
	I0601 11:03:27.237739    8092 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:03:27.238055    8092 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:03:27.238055    8092 cache.go:57] Caching tarball of preloaded images
	I0601 11:03:27.238303    8092 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:03:27.238431    8092 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:03:27.238431    8092 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:03:27.238431    8092 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220601110036-9404\config.json ...
	I0601 11:03:28.292872    8092 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:03:28.292872    8092 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:03:28.292872    8092 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:03:28.292872    8092 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:03:28.292872    8092 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:03:28.293412    8092 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:03:28.293567    8092 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:03:28.293567    8092 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:03:28.293659    8092 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:03:30.527809    8092 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-553900706: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-553900706: read-only file system"}
	I0601 11:03:30.527809    8092 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:03:30.527809    8092 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:03:30.527809    8092 start.go:352] acquiring machines lock for multinode-20220601110036-9404: {Name:mk61810b7619e82ed9a43b6c44c060dca72b11e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:03:30.528365    8092 start.go:356] acquired machines lock for "multinode-20220601110036-9404" in 556.2µs
	I0601 11:03:30.528730    8092 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:03:30.528813    8092 fix.go:55] fixHost starting: 
	I0601 11:03:30.544033    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:03:31.555815    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:03:31.555815    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0117705s)
	I0601 11:03:31.555815    8092 fix.go:103] recreateIfNeeded on multinode-20220601110036-9404: state= err=unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:31.555815    8092 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:03:31.560630    8092 out.go:177] * docker "multinode-20220601110036-9404" container is missing, will recreate.
	I0601 11:03:31.563227    8092 delete.go:124] DEMOLISHING multinode-20220601110036-9404 ...
	I0601 11:03:31.582074    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:03:32.617650    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:03:32.617860    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0353449s)
	W0601 11:03:32.617962    8092 stop.go:75] unable to get state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:32.618067    8092 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:32.633059    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:03:33.672493    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:03:33.672717    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0392171s)
	I0601 11:03:33.672717    8092 delete.go:82] Unable to get host status for multinode-20220601110036-9404, assuming it has already been deleted: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:33.681485    8092 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220601110036-9404
	W0601 11:03:34.737121    8092 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220601110036-9404 returned with exit code 1
	I0601 11:03:34.737121    8092 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220601110036-9404: (1.0554664s)
	I0601 11:03:34.737121    8092 kic.go:356] could not find the container multinode-20220601110036-9404 to remove it. will try anyways
	I0601 11:03:34.743469    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:03:35.778673    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:03:35.778673    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0341674s)
	W0601 11:03:35.778673    8092 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:35.784699    8092 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0"
	W0601 11:03:36.824509    8092 cli_runner.go:211] docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:03:36.824509    8092 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0": (1.0395345s)
	I0601 11:03:36.824585    8092 oci.go:625] error shutdown multinode-20220601110036-9404: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:37.837936    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:03:38.884442    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:03:38.884442    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0464937s)
	I0601 11:03:38.884442    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:38.884442    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:03:38.884442    8092 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:39.450374    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:03:40.496905    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:03:40.496905    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0465188s)
	I0601 11:03:40.496905    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:40.496905    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:03:40.496905    8092 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:41.597548    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:03:42.638599    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:03:42.638599    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0410389s)
	I0601 11:03:42.638599    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:42.638599    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:03:42.638599    8092 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:43.963505    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:03:44.957951    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:03:44.958094    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:44.958181    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:03:44.958181    8092 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:46.548539    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:03:47.576651    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:03:47.576651    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0280997s)
	I0601 11:03:47.576651    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:47.576651    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:03:47.576651    8092 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:49.928905    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:03:50.933767    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:03:50.933767    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0048508s)
	I0601 11:03:50.933767    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:50.933767    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:03:50.933767    8092 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:55.461446    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:03:56.486178    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:03:56.486178    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0247205s)
	I0601 11:03:56.486178    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:03:56.486178    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:03:56.486178    8092 oci.go:88] couldn't shut down multinode-20220601110036-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	 
	I0601 11:03:56.494425    8092 cli_runner.go:164] Run: docker rm -f -v multinode-20220601110036-9404
	I0601 11:03:57.499966    8092 cli_runner.go:217] Completed: docker rm -f -v multinode-20220601110036-9404: (1.0055295s)
	I0601 11:03:57.506968    8092 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220601110036-9404
	W0601 11:03:58.528956    8092 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220601110036-9404 returned with exit code 1
	I0601 11:03:58.528956    8092 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220601110036-9404: (1.0219763s)
	I0601 11:03:58.537315    8092 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:03:59.592629    8092 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:03:59.592754    8092 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0551468s)
	I0601 11:03:59.600565    8092 network_create.go:272] running [docker network inspect multinode-20220601110036-9404] to gather additional debugging logs...
	I0601 11:03:59.600565    8092 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404
	W0601 11:04:00.621798    8092 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:00.621798    8092 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404: (1.0212215s)
	I0601 11:04:00.621798    8092 network_create.go:275] error running [docker network inspect multinode-20220601110036-9404]: docker network inspect multinode-20220601110036-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220601110036-9404
	I0601 11:04:00.621798    8092 network_create.go:277] output of [docker network inspect multinode-20220601110036-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220601110036-9404
	
	** /stderr **
	W0601 11:04:00.622876    8092 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:04:00.622876    8092 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:04:01.630703    8092 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:04:01.634810    8092 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:04:01.635534    8092 start.go:165] libmachine.API.Create for "multinode-20220601110036-9404" (driver="docker")
	I0601 11:04:01.635609    8092 client.go:168] LocalClient.Create starting
	I0601 11:04:01.636287    8092 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:04:01.636691    8092 main.go:134] libmachine: Decoding PEM data...
	I0601 11:04:01.636691    8092 main.go:134] libmachine: Parsing certificate...
	I0601 11:04:01.636691    8092 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:04:01.636691    8092 main.go:134] libmachine: Decoding PEM data...
	I0601 11:04:01.636691    8092 main.go:134] libmachine: Parsing certificate...
	I0601 11:04:01.646126    8092 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:04:02.708550    8092 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:04:02.708550    8092 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0624115s)
	I0601 11:04:02.715549    8092 network_create.go:272] running [docker network inspect multinode-20220601110036-9404] to gather additional debugging logs...
	I0601 11:04:02.715549    8092 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404
	W0601 11:04:03.787692    8092 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:03.787868    8092 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404: (1.0719371s)
	I0601 11:04:03.787868    8092 network_create.go:275] error running [docker network inspect multinode-20220601110036-9404]: docker network inspect multinode-20220601110036-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220601110036-9404
	I0601 11:04:03.787929    8092 network_create.go:277] output of [docker network inspect multinode-20220601110036-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220601110036-9404
	
	** /stderr **
	I0601 11:04:03.796067    8092 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:04:04.864036    8092 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0679569s)
	I0601 11:04:04.881181    8092 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007144b0] misses:0}
	I0601 11:04:04.881825    8092 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:04:04.881825    8092 network_create.go:115] attempt to create docker network multinode-20220601110036-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:04:04.889604    8092 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404
	W0601 11:04:05.932756    8092 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:05.932756    8092 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: (1.0431403s)
	E0601 11:04:05.932756    8092 network_create.go:104] error while trying to create docker network multinode-20220601110036-9404 192.168.49.0/24: create docker network multinode-20220601110036-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f31e1ff319c2c58530424fbab2771bfdb43231d36d345dd28fddedb7234c6489 (br-f31e1ff319c2): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:04:05.933412    8092 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f31e1ff319c2c58530424fbab2771bfdb43231d36d345dd28fddedb7234c6489 (br-f31e1ff319c2): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f31e1ff319c2c58530424fbab2771bfdb43231d36d345dd28fddedb7234c6489 (br-f31e1ff319c2): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:04:05.946514    8092 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:04:07.014307    8092 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0676341s)
	I0601 11:04:07.020878    8092 cli_runner.go:164] Run: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:04:08.050493    8092 cli_runner.go:211] docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:04:08.050493    8092 cli_runner.go:217] Completed: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0296031s)
	I0601 11:04:08.050493    8092 client.go:171] LocalClient.Create took 6.4148113s
	I0601 11:04:10.074497    8092 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:04:10.080493    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:04:11.121217    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:11.121441    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.040712s)
	I0601 11:04:11.121701    8092 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:11.299629    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:04:12.319459    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:12.319459    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0198177s)
	W0601 11:04:12.319459    8092 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:04:12.319459    8092 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:12.331794    8092 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:04:12.338808    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:04:13.353425    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:13.353425    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.014606s)
	I0601 11:04:13.353425    8092 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:13.563022    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:04:14.616727    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:14.616727    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0536217s)
	W0601 11:04:14.616727    8092 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:04:14.616727    8092 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:14.616727    8092 start.go:134] duration metric: createHost completed in 12.9857445s
	I0601 11:04:14.627832    8092 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:04:14.633488    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:04:15.654704    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:15.654947    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0212041s)
	I0601 11:04:15.655190    8092 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:15.999741    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:04:17.039870    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:17.039940    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0398405s)
	W0601 11:04:17.040237    8092 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:04:17.040310    8092 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:17.051149    8092 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:04:17.056946    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:04:18.097644    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:18.097644    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0406866s)
	I0601 11:04:18.097644    8092 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:18.339138    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:04:19.342904    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:19.342904    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0036647s)
	W0601 11:04:19.342904    8092 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:04:19.342904    8092 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:19.342904    8092 fix.go:57] fixHost completed within 48.8135339s
	I0601 11:04:19.342904    8092 start.go:81] releasing machines lock for "multinode-20220601110036-9404", held for 48.8139822s
	W0601 11:04:19.343558    8092 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	W0601 11:04:19.343716    8092 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	I0601 11:04:19.343716    8092 start.go:614] Will try again in 5 seconds ...
	I0601 11:04:24.352038    8092 start.go:352] acquiring machines lock for multinode-20220601110036-9404: {Name:mk61810b7619e82ed9a43b6c44c060dca72b11e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:04:24.352038    8092 start.go:356] acquired machines lock for "multinode-20220601110036-9404" in 0s
	I0601 11:04:24.352038    8092 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:04:24.352567    8092 fix.go:55] fixHost starting: 
	I0601 11:04:24.365539    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:04:25.382126    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:04:25.382126    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0165758s)
	I0601 11:04:25.382126    8092 fix.go:103] recreateIfNeeded on multinode-20220601110036-9404: state= err=unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:25.382126    8092 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:04:25.386129    8092 out.go:177] * docker "multinode-20220601110036-9404" container is missing, will recreate.
	I0601 11:04:25.389161    8092 delete.go:124] DEMOLISHING multinode-20220601110036-9404 ...
	I0601 11:04:25.401129    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:04:26.416125    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:04:26.416203    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0146354s)
	W0601 11:04:26.416203    8092 stop.go:75] unable to get state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:26.416353    8092 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:26.429363    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:04:27.468821    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:04:27.468882    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0392294s)
	I0601 11:04:27.468882    8092 delete.go:82] Unable to get host status for multinode-20220601110036-9404, assuming it has already been deleted: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:27.475937    8092 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220601110036-9404
	W0601 11:04:28.494320    8092 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:28.494457    8092 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220601110036-9404: (1.0183717s)
	I0601 11:04:28.494633    8092 kic.go:356] could not find the container multinode-20220601110036-9404 to remove it. will try anyways
	I0601 11:04:28.501521    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:04:29.517320    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:04:29.517320    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0157874s)
	W0601 11:04:29.517320    8092 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:29.525807    8092 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0"
	W0601 11:04:30.543932    8092 cli_runner.go:211] docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:04:30.544049    8092 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0": (1.0180522s)
	I0601 11:04:30.544049    8092 oci.go:625] error shutdown multinode-20220601110036-9404: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:31.566622    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:04:32.586408    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:04:32.586408    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0196072s)
	I0601 11:04:32.586507    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:32.586587    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:04:32.586587    8092 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:33.082427    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:04:34.107377    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:04:34.107377    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.024938s)
	I0601 11:04:34.107377    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:34.107377    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:04:34.107377    8092 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:34.705225    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:04:35.711025    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:04:35.711265    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0056214s)
	I0601 11:04:35.711374    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:35.711374    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:04:35.711476    8092 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:36.621322    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:04:37.642881    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:04:37.643037    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0214902s)
	I0601 11:04:37.643148    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:37.643148    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:04:37.643148    8092 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:39.653962    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:04:40.706896    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:04:40.706896    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0528374s)
	I0601 11:04:40.707126    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:40.707154    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:04:40.707178    8092 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:42.545017    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:04:43.575363    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:04:43.575392    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0301935s)
	I0601 11:04:43.575498    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:43.575498    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:04:43.575601    8092 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:46.256212    8092 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:04:47.266328    8092 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:04:47.266328    8092 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0101044s)
	I0601 11:04:47.266725    8092 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:04:47.266758    8092 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:04:47.266849    8092 oci.go:88] couldn't shut down multinode-20220601110036-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	 
	I0601 11:04:47.274956    8092 cli_runner.go:164] Run: docker rm -f -v multinode-20220601110036-9404
	I0601 11:04:48.280320    8092 cli_runner.go:217] Completed: docker rm -f -v multinode-20220601110036-9404: (1.0053526s)
	I0601 11:04:48.287767    8092 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220601110036-9404
	W0601 11:04:49.303256    8092 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:49.303387    8092 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220601110036-9404: (1.0152471s)
	I0601 11:04:49.310298    8092 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:04:50.320470    8092 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:04:50.320470    8092 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0101603s)
	I0601 11:04:50.326471    8092 network_create.go:272] running [docker network inspect multinode-20220601110036-9404] to gather additional debugging logs...
	I0601 11:04:50.326471    8092 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404
	W0601 11:04:51.339282    8092 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:51.339282    8092 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404: (1.0127992s)
	I0601 11:04:51.339282    8092 network_create.go:275] error running [docker network inspect multinode-20220601110036-9404]: docker network inspect multinode-20220601110036-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220601110036-9404
	I0601 11:04:51.339495    8092 network_create.go:277] output of [docker network inspect multinode-20220601110036-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220601110036-9404
	
	** /stderr **
	W0601 11:04:51.340461    8092 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:04:51.340461    8092 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:04:52.351215    8092 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:04:52.356368    8092 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:04:52.356825    8092 start.go:165] libmachine.API.Create for "multinode-20220601110036-9404" (driver="docker")
	I0601 11:04:52.356825    8092 client.go:168] LocalClient.Create starting
	I0601 11:04:52.357310    8092 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:04:52.357310    8092 main.go:134] libmachine: Decoding PEM data...
	I0601 11:04:52.357856    8092 main.go:134] libmachine: Parsing certificate...
	I0601 11:04:52.358044    8092 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:04:52.358044    8092 main.go:134] libmachine: Decoding PEM data...
	I0601 11:04:52.358044    8092 main.go:134] libmachine: Parsing certificate...
	I0601 11:04:52.366099    8092 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:04:53.384191    8092 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:04:53.384342    8092 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0178473s)
	I0601 11:04:53.391466    8092 network_create.go:272] running [docker network inspect multinode-20220601110036-9404] to gather additional debugging logs...
	I0601 11:04:53.391466    8092 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404
	W0601 11:04:54.438625    8092 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:54.438625    8092 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404: (1.0471479s)
	I0601 11:04:54.438625    8092 network_create.go:275] error running [docker network inspect multinode-20220601110036-9404]: docker network inspect multinode-20220601110036-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220601110036-9404
	I0601 11:04:54.438625    8092 network_create.go:277] output of [docker network inspect multinode-20220601110036-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220601110036-9404
	
	** /stderr **
	I0601 11:04:54.447062    8092 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:04:55.496280    8092 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0490285s)
	I0601 11:04:55.512769    8092 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007144b0] amended:false}} dirty:map[] misses:0}
	I0601 11:04:55.512769    8092 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:04:55.528757    8092 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007144b0] amended:true}} dirty:map[192.168.49.0:0xc0007144b0 192.168.58.0:0xc00014e678] misses:0}
	I0601 11:04:55.529345    8092 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:04:55.529345    8092 network_create.go:115] attempt to create docker network multinode-20220601110036-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:04:55.537446    8092 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404
	W0601 11:04:56.553671    8092 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404 returned with exit code 1
	I0601 11:04:56.553922    8092 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: (1.0159963s)
	E0601 11:04:56.553922    8092 network_create.go:104] error while trying to create docker network multinode-20220601110036-9404 192.168.58.0/24: create docker network multinode-20220601110036-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 150d25e124028ccd0f9dd73ef54c5c62db1c7369f22ffecd052f5b765e76a472 (br-150d25e12402): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:04:56.553922    8092 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 150d25e124028ccd0f9dd73ef54c5c62db1c7369f22ffecd052f5b765e76a472 (br-150d25e12402): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 150d25e124028ccd0f9dd73ef54c5c62db1c7369f22ffecd052f5b765e76a472 (br-150d25e12402): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:04:56.567356    8092 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:04:57.598682    8092 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.031314s)
	I0601 11:04:57.606932    8092 cli_runner.go:164] Run: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:04:58.648352    8092 cli_runner.go:211] docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:04:58.648352    8092 cli_runner.go:217] Completed: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: (1.041408s)
	I0601 11:04:58.648352    8092 client.go:171] LocalClient.Create took 6.2913375s
	I0601 11:05:00.660254    8092 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:05:00.665281    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:05:01.652517    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:05:01.652517    8092 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:05:01.942215    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:05:03.000245    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:05:03.000445    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0578573s)
	W0601 11:05:03.000445    8092 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:05:03.000445    8092 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:05:03.010439    8092 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:05:03.016648    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:05:04.085367    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:05:04.085367    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0687072s)
	I0601 11:05:04.085367    8092 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:05:04.297240    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:05:05.346104    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:05:05.346104    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0487303s)
	W0601 11:05:05.346441    8092 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:05:05.346441    8092 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:05:05.346513    8092 start.go:134] duration metric: createHost completed in 12.9949518s
	I0601 11:05:05.356568    8092 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:05:05.362857    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:05:06.394238    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:05:06.394238    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0311788s)
	I0601 11:05:06.394470    8092 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:05:06.721507    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:05:07.719218    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	W0601 11:05:07.719362    8092 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:05:07.719362    8092 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:05:07.728648    8092 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:05:07.736110    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:05:08.764189    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:05:08.764189    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0279536s)
	I0601 11:05:08.764742    8092 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:05:09.115429    8092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:05:10.126966    8092 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:05:10.126966    8092 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0114446s)
	W0601 11:05:10.127148    8092 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:05:10.127148    8092 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:05:10.127148    8092 fix.go:57] fixHost completed within 45.7740592s
	I0601 11:05:10.127148    8092 start.go:81] releasing machines lock for "multinode-20220601110036-9404", held for 45.774588s
	W0601 11:05:10.127328    8092 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-20220601110036-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220601110036-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	I0601 11:05:10.139158    8092 out.go:177] 
	W0601 11:05:10.141268    8092 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	W0601 11:05:10.142266    8092 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:05:10.142379    8092 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:05:10.145879    8092 out.go:177] 

** /stderr **
multinode_test.go:295: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-20220601110036-9404" : exit status 60
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220601110036-9404
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.0847898s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.7557327s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:05:14.552952    8484 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (136.93s)

TestMultiNode/serial/DeleteNode (9.79s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 node delete m03
multinode_test.go:392: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 node delete m03: exit status 80 (3.1190815s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_207105384607abbf0a822abec5db82084f27bc08_4.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:394: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 node delete m03": exit status 80
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --alsologtostderr
multinode_test.go:398: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --alsologtostderr: exit status 7 (2.750581s)

-- stdout --
	multinode-20220601110036-9404
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0601 11:05:17.944004    7632 out.go:296] Setting OutFile to fd 832 ...
	I0601 11:05:17.996008    7632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:05:17.996008    7632 out.go:309] Setting ErrFile to fd 764...
	I0601 11:05:17.996008    7632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:05:18.009008    7632 out.go:303] Setting JSON to false
	I0601 11:05:18.009008    7632 mustload.go:65] Loading cluster: multinode-20220601110036-9404
	I0601 11:05:18.010009    7632 config.go:178] Loaded profile config "multinode-20220601110036-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:05:18.010009    7632 status.go:253] checking status of multinode-20220601110036-9404 ...
	I0601 11:05:18.023004    7632 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:05:20.422321    7632 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:05:20.422321    7632 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (2.3992898s)
	I0601 11:05:20.422321    7632 status.go:328] multinode-20220601110036-9404 host status = "" (err=state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	)
	I0601 11:05:20.422321    7632 status.go:255] multinode-20220601110036-9404 status: &{Name:multinode-20220601110036-9404 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0601 11:05:20.422321    7632 status.go:258] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	E0601 11:05:20.422321    7632 status.go:261] The "multinode-20220601110036-9404" host does not exist!

** /stderr **
multinode_test.go:400: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.1047345s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.808586s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:05:24.345986    3324 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (9.79s)

TestMultiNode/serial/StopMultiNode (31.73s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 stop
multinode_test.go:312: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 stop: exit status 82 (22.2114116s)

-- stdout --
	* Stopping node "multinode-20220601110036-9404"  ...
	* Stopping node "multinode-20220601110036-9404"  ...
	* Stopping node "multinode-20220601110036-9404"  ...
	* Stopping node "multinode-20220601110036-9404"  ...
	* Stopping node "multinode-20220601110036-9404"  ...
	* Stopping node "multinode-20220601110036-9404"  ...
	
	

-- /stdout --
** stderr ** 
	E0601 11:05:29.679502    1480 daemonize_windows.go:38] error terminating scheduled stop for profile multinode-20220601110036-9404: stopping schedule-stop service for profile multinode-20220601110036-9404: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20220601110036-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:314: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 stop": exit status 82
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status: exit status 7 (2.794751s)

-- stdout --
	multinode-20220601110036-9404
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0601 11:05:49.354649    9284 status.go:258] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	E0601 11:05:49.354649    9284 status.go:261] The "multinode-20220601110036-9404" host does not exist!

** /stderr **
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --alsologtostderr: exit status 7 (2.82425s)

-- stdout --
	multinode-20220601110036-9404
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0601 11:05:49.610568    6268 out.go:296] Setting OutFile to fd 820 ...
	I0601 11:05:49.665526    6268 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:05:49.665526    6268 out.go:309] Setting ErrFile to fd 264...
	I0601 11:05:49.665526    6268 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:05:49.688246    6268 out.go:303] Setting JSON to false
	I0601 11:05:49.688246    6268 mustload.go:65] Loading cluster: multinode-20220601110036-9404
	I0601 11:05:49.689335    6268 config.go:178] Loaded profile config "multinode-20220601110036-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:05:49.689335    6268 status.go:253] checking status of multinode-20220601110036-9404 ...
	I0601 11:05:49.702642    6268 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:05:52.177175    6268 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:05:52.177267    6268 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (2.4742874s)
	I0601 11:05:52.177460    6268 status.go:328] multinode-20220601110036-9404 host status = "" (err=state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	)
	I0601 11:05:52.177545    6268 status.go:255] multinode-20220601110036-9404 status: &{Name:multinode-20220601110036-9404 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0601 11:05:52.177545    6268 status.go:258] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	E0601 11:05:52.177545    6268 status.go:261] The "multinode-20220601110036-9404" host does not exist!

** /stderr **
multinode_test.go:331: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --alsologtostderr": multinode-20220601110036-9404
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:335: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-20220601110036-9404 status --alsologtostderr": multinode-20220601110036-9404
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.0960722s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.7993134s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:05:56.080555    9892 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (31.73s)

TestMultiNode/serial/RestartMultiNode (115.17s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:342: (dbg) Done: docker version -f {{.Server.Version}}: (1.1670767s)
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404 --wait=true -v=8 --alsologtostderr --driver=docker
multinode_test.go:352: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404 --wait=true -v=8 --alsologtostderr --driver=docker: exit status 60 (1m49.955074s)

-- stdout --
	* [multinode-20220601110036-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20220601110036-9404 in cluster multinode-20220601110036-9404
	* Pulling base image ...
	* docker "multinode-20220601110036-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20220601110036-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:05:57.515144    2444 out.go:296] Setting OutFile to fd 712 ...
	I0601 11:05:57.569111    2444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:05:57.569111    2444 out.go:309] Setting ErrFile to fd 856...
	I0601 11:05:57.569111    2444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:05:57.580258    2444 out.go:303] Setting JSON to false
	I0601 11:05:57.583718    2444 start.go:115] hostinfo: {"hostname":"minikube2","uptime":13493,"bootTime":1654068064,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:05:57.583718    2444 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:05:57.591827    2444 out.go:177] * [multinode-20220601110036-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:05:57.594479    2444 notify.go:193] Checking for updates...
	I0601 11:05:57.596491    2444 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:05:57.599064    2444 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:05:57.603396    2444 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:05:57.605690    2444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:05:57.609446    2444 config.go:178] Loaded profile config "multinode-20220601110036-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:05:57.610580    2444 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:06:00.257350    2444 docker.go:137] docker version: linux-20.10.14
	I0601 11:06:00.268593    2444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:06:02.240552    2444 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9719368s)
	I0601 11:06:02.241402    2444 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:06:01.2234731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:06:02.247890    2444 out.go:177] * Using the docker driver based on existing profile
	I0601 11:06:02.250496    2444 start.go:284] selected driver: docker
	I0601 11:06:02.250496    2444 start.go:806] validating driver "docker" against &{Name:multinode-20220601110036-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220601110036-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:06:02.250496    2444 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:06:02.275032    2444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:06:04.332140    2444 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0569418s)
	I0601 11:06:04.332444    2444 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:06:03.2726115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:06:04.447250    2444 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:06:04.447250    2444 cni.go:95] Creating CNI manager for ""
	I0601 11:06:04.447250    2444 cni.go:156] 1 nodes found, recommending kindnet
	I0601 11:06:04.447250    2444 start_flags.go:306] config:
	{Name:multinode-20220601110036-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220601110036-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:06:04.451774    2444 out.go:177] * Starting control plane node multinode-20220601110036-9404 in cluster multinode-20220601110036-9404
	I0601 11:06:04.454656    2444 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:06:04.458274    2444 out.go:177] * Pulling base image ...
	I0601 11:06:04.460707    2444 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:06:04.460707    2444 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:06:04.460707    2444 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:06:04.460707    2444 cache.go:57] Caching tarball of preloaded images
	I0601 11:06:04.461395    2444 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:06:04.461395    2444 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:06:04.461919    2444 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220601110036-9404\config.json ...
	I0601 11:06:05.509549    2444 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:06:05.509617    2444 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:06:05.509617    2444 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:06:05.509617    2444 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:06:05.509617    2444 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:06:05.510258    2444 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:06:05.510338    2444 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:06:05.510452    2444 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:06:05.510492    2444 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:06:07.726153    2444 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-805996953: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-805996953: read-only file system"}
	I0601 11:06:07.726672    2444 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:06:07.726745    2444 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:06:07.726847    2444 start.go:352] acquiring machines lock for multinode-20220601110036-9404: {Name:mk61810b7619e82ed9a43b6c44c060dca72b11e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:06:07.726847    2444 start.go:356] acquired machines lock for "multinode-20220601110036-9404" in 0s
	I0601 11:06:07.726847    2444 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:06:07.726847    2444 fix.go:55] fixHost starting: 
	I0601 11:06:07.742695    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:06:08.754787    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:06:08.754787    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0118352s)
	I0601 11:06:08.754973    2444 fix.go:103] recreateIfNeeded on multinode-20220601110036-9404: state= err=unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:08.754973    2444 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:06:08.759885    2444 out.go:177] * docker "multinode-20220601110036-9404" container is missing, will recreate.
	I0601 11:06:08.762381    2444 delete.go:124] DEMOLISHING multinode-20220601110036-9404 ...
	I0601 11:06:08.773404    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:06:09.786278    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:06:09.786278    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0128623s)
	W0601 11:06:09.786278    2444 stop.go:75] unable to get state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:09.786278    2444 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:09.799107    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:06:10.831802    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:06:10.831802    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.032683s)
	I0601 11:06:10.831802    2444 delete.go:82] Unable to get host status for multinode-20220601110036-9404, assuming it has already been deleted: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:10.839355    2444 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220601110036-9404
	W0601 11:06:11.843666    2444 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220601110036-9404 returned with exit code 1
	I0601 11:06:11.843666    2444 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220601110036-9404: (1.0042993s)
	I0601 11:06:11.843666    2444 kic.go:356] could not find the container multinode-20220601110036-9404 to remove it. will try anyways
	I0601 11:06:11.850658    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:06:12.893760    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:06:12.893760    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0430898s)
	W0601 11:06:12.893760    2444 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:12.899723    2444 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0"
	W0601 11:06:13.910946    2444 cli_runner.go:211] docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:06:13.910946    2444 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0": (1.0106325s)
	I0601 11:06:13.910946    2444 oci.go:625] error shutdown multinode-20220601110036-9404: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:14.928376    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:06:15.958086    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:06:15.958086    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0294595s)
	I0601 11:06:15.958086    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:15.958086    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:06:15.958086    2444 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:16.531487    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:06:17.548767    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:06:17.548767    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0172686s)
	I0601 11:06:17.548767    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:17.548767    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:06:17.548767    2444 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:18.641763    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:06:19.648399    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:06:19.648399    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0060251s)
	I0601 11:06:19.648399    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:19.648399    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:06:19.648399    2444 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:20.973138    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:06:21.978396    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:06:21.978436    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0051593s)
	I0601 11:06:21.978522    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:21.978599    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:06:21.978652    2444 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:23.569727    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:06:24.591211    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:06:24.591242    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0213132s)
	I0601 11:06:24.591242    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:24.591242    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:06:24.591242    2444 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:26.947100    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:06:27.979304    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:06:27.979304    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0321916s)
	I0601 11:06:27.979304    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:27.979304    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:06:27.979900    2444 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:32.499865    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:06:33.542539    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:06:33.542539    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0426619s)
	I0601 11:06:33.542539    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:33.542539    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:06:33.542539    2444 oci.go:88] couldn't shut down multinode-20220601110036-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	 
	I0601 11:06:33.549856    2444 cli_runner.go:164] Run: docker rm -f -v multinode-20220601110036-9404
	I0601 11:06:34.593930    2444 cli_runner.go:217] Completed: docker rm -f -v multinode-20220601110036-9404: (1.0439613s)
	I0601 11:06:34.601044    2444 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220601110036-9404
	W0601 11:06:35.624587    2444 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220601110036-9404 returned with exit code 1
	I0601 11:06:35.624587    2444 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220601110036-9404: (1.0235319s)
	I0601 11:06:35.632876    2444 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:06:36.656381    2444 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:06:36.656563    2444 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0233439s)
	I0601 11:06:36.663565    2444 network_create.go:272] running [docker network inspect multinode-20220601110036-9404] to gather additional debugging logs...
	I0601 11:06:36.663565    2444 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404
	W0601 11:06:37.675826    2444 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 returned with exit code 1
	I0601 11:06:37.675826    2444 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404: (1.0122499s)
	I0601 11:06:37.675826    2444 network_create.go:275] error running [docker network inspect multinode-20220601110036-9404]: docker network inspect multinode-20220601110036-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220601110036-9404
	I0601 11:06:37.675826    2444 network_create.go:277] output of [docker network inspect multinode-20220601110036-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220601110036-9404
	
	** /stderr **
	W0601 11:06:37.677097    2444 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:06:37.677097    2444 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:06:38.684331    2444 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:06:38.689290    2444 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:06:38.689290    2444 start.go:165] libmachine.API.Create for "multinode-20220601110036-9404" (driver="docker")
	I0601 11:06:38.689290    2444 client.go:168] LocalClient.Create starting
	I0601 11:06:38.689887    2444 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:06:38.690633    2444 main.go:134] libmachine: Decoding PEM data...
	I0601 11:06:38.690633    2444 main.go:134] libmachine: Parsing certificate...
	I0601 11:06:38.691175    2444 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:06:38.691349    2444 main.go:134] libmachine: Decoding PEM data...
	I0601 11:06:38.691349    2444 main.go:134] libmachine: Parsing certificate...
	I0601 11:06:38.699606    2444 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:06:39.732494    2444 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:06:39.732494    2444 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0328762s)
	I0601 11:06:39.740000    2444 network_create.go:272] running [docker network inspect multinode-20220601110036-9404] to gather additional debugging logs...
	I0601 11:06:39.740534    2444 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404
	W0601 11:06:40.762047    2444 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 returned with exit code 1
	I0601 11:06:40.762179    2444 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404: (1.0215018s)
	I0601 11:06:40.762253    2444 network_create.go:275] error running [docker network inspect multinode-20220601110036-9404]: docker network inspect multinode-20220601110036-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220601110036-9404
	I0601 11:06:40.762253    2444 network_create.go:277] output of [docker network inspect multinode-20220601110036-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220601110036-9404
	
	** /stderr **
	I0601 11:06:40.769788    2444 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:06:41.808740    2444 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0389401s)
	I0601 11:06:41.828439    2444 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005fa318] misses:0}
	I0601 11:06:41.828439    2444 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:06:41.828439    2444 network_create.go:115] attempt to create docker network multinode-20220601110036-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:06:41.838972    2444 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404
	W0601 11:06:42.863341    2444 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404 returned with exit code 1
	I0601 11:06:42.863341    2444 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: (1.0243577s)
	E0601 11:06:42.863341    2444 network_create.go:104] error while trying to create docker network multinode-20220601110036-9404 192.168.49.0/24: create docker network multinode-20220601110036-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 521b0fb3cf26fb46e739b663942167bb16848e50f90c1da7c550d27190fb1b71 (br-521b0fb3cf26): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:06:42.863341    2444 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 521b0fb3cf26fb46e739b663942167bb16848e50f90c1da7c550d27190fb1b71 (br-521b0fb3cf26): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 521b0fb3cf26fb46e739b663942167bb16848e50f90c1da7c550d27190fb1b71 (br-521b0fb3cf26): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
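Editor's note on the failure above: Docker refuses `network create` because the requested subnet 192.168.49.0/24 collides with the address range of an existing bridge network (the log prints only the network IDs, not the conflicting CIDR). A minimal sketch of the overlap check Docker is performing, using Python's standard `ipaddress` module — the `existing` subnet below is a hypothetical value for illustration, not taken from this log:

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True when two CIDR blocks share at least one address."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Subnet minikube requested (from the `docker network create` line above).
requested = "192.168.49.0/24"
# Hypothetical range already held by the existing bridge network br-0c9673f75245.
existing = "192.168.49.0/25"

print(subnets_overlap(requested, existing))  # → True, so the daemon rejects the create
```

In practice the conflicting range can be found by inspecting each bridge network's `IPAM.Config`, which is exactly what the `docker network inspect bridge --format …` call earlier in this log does.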
	I0601 11:06:42.875991    2444 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:06:43.910980    2444 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0349774s)
	I0601 11:06:43.918738    2444 cli_runner.go:164] Run: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:06:44.930218    2444 cli_runner.go:211] docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:06:44.930249    2444 cli_runner.go:217] Completed: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0113076s)
	I0601 11:06:44.930324    2444 client.go:171] LocalClient.Create took 6.2409629s
	I0601 11:06:46.950102    2444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:06:46.955621    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:06:48.009258    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:06:48.009418    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0536245s)
	I0601 11:06:48.009418    2444 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:48.190042    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:06:49.177443    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	W0601 11:06:49.177443    2444 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:06:49.177443    2444 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:49.186490    2444 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:06:49.191489    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:06:50.205633    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:06:50.205633    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0131326s)
	I0601 11:06:50.205633    2444 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:50.415846    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:06:51.425311    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:06:51.425311    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0094528s)
	W0601 11:06:51.425311    2444 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:06:51.425311    2444 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:51.425311    2444 start.go:134] duration metric: createHost completed in 12.740834s
	I0601 11:06:51.436525    2444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:06:51.442487    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:06:52.473530    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:06:52.473580    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0309764s)
	I0601 11:06:52.473580    2444 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:52.814964    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:06:53.823365    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:06:53.823612    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0083898s)
	W0601 11:06:53.823810    2444 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:06:53.823810    2444 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:53.834980    2444 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:06:53.840817    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:06:54.869878    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:06:54.869912    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0289422s)
	I0601 11:06:54.870219    2444 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:55.112720    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:06:56.128178    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:06:56.128265    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0154464s)
	W0601 11:06:56.128312    2444 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:06:56.128312    2444 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:06:56.128312    2444 fix.go:57] fixHost completed within 48.4009132s
	I0601 11:06:56.128312    2444 start.go:81] releasing machines lock for "multinode-20220601110036-9404", held for 48.4009132s
	W0601 11:06:56.133927    2444 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	W0601 11:06:56.134457    2444 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	I0601 11:06:56.134457    2444 start.go:614] Will try again in 5 seconds ...
	I0601 11:07:01.137899    2444 start.go:352] acquiring machines lock for multinode-20220601110036-9404: {Name:mk61810b7619e82ed9a43b6c44c060dca72b11e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:07:01.137899    2444 start.go:356] acquired machines lock for "multinode-20220601110036-9404" in 0s
	I0601 11:07:01.138488    2444 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:07:01.138488    2444 fix.go:55] fixHost starting: 
	I0601 11:07:01.151791    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:07:02.179958    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:07:02.179958    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0272122s)
	I0601 11:07:02.179958    2444 fix.go:103] recreateIfNeeded on multinode-20220601110036-9404: state= err=unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:02.179958    2444 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:07:02.186260    2444 out.go:177] * docker "multinode-20220601110036-9404" container is missing, will recreate.
	I0601 11:07:02.189172    2444 delete.go:124] DEMOLISHING multinode-20220601110036-9404 ...
	I0601 11:07:02.201163    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:07:03.244341    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:07:03.244341    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0431657s)
	W0601 11:07:03.244341    2444 stop.go:75] unable to get state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:03.244341    2444 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:03.259556    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:07:04.325605    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:07:04.325605    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.066037s)
	I0601 11:07:04.325605    2444 delete.go:82] Unable to get host status for multinode-20220601110036-9404, assuming it has already been deleted: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:04.333027    2444 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220601110036-9404
	W0601 11:07:05.383870    2444 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:05.384115    2444 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220601110036-9404: (1.0508313s)
	I0601 11:07:05.384115    2444 kic.go:356] could not find the container multinode-20220601110036-9404 to remove it. will try anyways
	I0601 11:07:05.391292    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:07:06.428022    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:07:06.428301    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.036719s)
	W0601 11:07:06.428400    2444 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:06.436176    2444 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0"
	W0601 11:07:07.487716    2444 cli_runner.go:211] docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:07:07.487716    2444 cli_runner.go:217] Completed: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0": (1.0515279s)
	I0601 11:07:07.487716    2444 oci.go:625] error shutdown multinode-20220601110036-9404: docker exec --privileged -t multinode-20220601110036-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:08.495088    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:07:09.482932    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:07:09.482932    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:09.482932    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:07:09.482932    2444 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:09.974973    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:07:10.990258    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:07:10.990258    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0152729s)
	I0601 11:07:10.990258    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:10.990258    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:07:10.990258    2444 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:11.594336    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:07:12.632169    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:07:12.632169    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0378215s)
	I0601 11:07:12.632169    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:12.632169    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:07:12.632169    2444 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:13.544012    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:07:14.555030    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:07:14.555091    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0108179s)
	I0601 11:07:14.555091    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:14.555091    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:07:14.555091    2444 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:16.567457    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:07:17.597467    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:07:17.597525    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0298511s)
	I0601 11:07:17.597669    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:17.597669    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:07:17.597669    2444 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:19.438237    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:07:20.443559    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:07:20.443626    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0051383s)
	I0601 11:07:20.443692    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:20.443692    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:07:20.443692    2444 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:23.122934    2444 cli_runner.go:164] Run: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}
	W0601 11:07:24.152480    2444 cli_runner.go:211] docker container inspect multinode-20220601110036-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:07:24.152794    2444 cli_runner.go:217] Completed: docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: (1.0295056s)
	I0601 11:07:24.152877    2444 oci.go:637] temporary error verifying shutdown: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:24.152943    2444 oci.go:639] temporary error: container multinode-20220601110036-9404 status is  but expect it to be exited
	I0601 11:07:24.153095    2444 oci.go:88] couldn't shut down multinode-20220601110036-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	 
	I0601 11:07:24.157029    2444 cli_runner.go:164] Run: docker rm -f -v multinode-20220601110036-9404
	I0601 11:07:25.172293    2444 cli_runner.go:217] Completed: docker rm -f -v multinode-20220601110036-9404: (1.0152524s)
	I0601 11:07:25.179169    2444 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220601110036-9404
	W0601 11:07:26.180314    2444 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:26.180444    2444 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} multinode-20220601110036-9404: (1.0009624s)
	I0601 11:07:26.187870    2444 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:07:27.196204    2444 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:07:27.196204    2444 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0081816s)
	I0601 11:07:27.203763    2444 network_create.go:272] running [docker network inspect multinode-20220601110036-9404] to gather additional debugging logs...
	I0601 11:07:27.203763    2444 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404
	W0601 11:07:28.223347    2444 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:28.223347    2444 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404: (1.0195727s)
	I0601 11:07:28.223347    2444 network_create.go:275] error running [docker network inspect multinode-20220601110036-9404]: docker network inspect multinode-20220601110036-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220601110036-9404
	I0601 11:07:28.223347    2444 network_create.go:277] output of [docker network inspect multinode-20220601110036-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220601110036-9404
	
	** /stderr **
	W0601 11:07:28.224323    2444 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:07:28.224323    2444 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:07:29.228362    2444 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:07:29.232358    2444 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:07:29.232358    2444 start.go:165] libmachine.API.Create for "multinode-20220601110036-9404" (driver="docker")
	I0601 11:07:29.232358    2444 client.go:168] LocalClient.Create starting
	I0601 11:07:29.233177    2444 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:07:29.233380    2444 main.go:134] libmachine: Decoding PEM data...
	I0601 11:07:29.233450    2444 main.go:134] libmachine: Parsing certificate...
	I0601 11:07:29.233596    2444 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:07:29.233786    2444 main.go:134] libmachine: Decoding PEM data...
	I0601 11:07:29.233786    2444 main.go:134] libmachine: Parsing certificate...
	I0601 11:07:29.242568    2444 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:07:30.246292    2444 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:07:30.246368    2444 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0035627s)
	I0601 11:07:30.256619    2444 network_create.go:272] running [docker network inspect multinode-20220601110036-9404] to gather additional debugging logs...
	I0601 11:07:30.256841    2444 cli_runner.go:164] Run: docker network inspect multinode-20220601110036-9404
	W0601 11:07:31.277575    2444 cli_runner.go:211] docker network inspect multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:31.277575    2444 cli_runner.go:217] Completed: docker network inspect multinode-20220601110036-9404: (1.0207224s)
	I0601 11:07:31.277575    2444 network_create.go:275] error running [docker network inspect multinode-20220601110036-9404]: docker network inspect multinode-20220601110036-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220601110036-9404
	I0601 11:07:31.277575    2444 network_create.go:277] output of [docker network inspect multinode-20220601110036-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220601110036-9404
	
	** /stderr **
	I0601 11:07:31.285715    2444 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:07:32.313560    2444 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.027726s)
	I0601 11:07:32.329976    2444 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005fa318] amended:false}} dirty:map[] misses:0}
	I0601 11:07:32.329976    2444 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:07:32.345254    2444 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005fa318] amended:true}} dirty:map[192.168.49.0:0xc0005fa318 192.168.58.0:0xc000124480] misses:0}
	I0601 11:07:32.345254    2444 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:07:32.345937    2444 network_create.go:115] attempt to create docker network multinode-20220601110036-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:07:32.352268    2444 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404
	W0601 11:07:33.426573    2444 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:33.426573    2444 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: (1.0742926s)
	E0601 11:07:33.426770    2444 network_create.go:104] error while trying to create docker network multinode-20220601110036-9404 192.168.58.0/24: create docker network multinode-20220601110036-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a816c8cca2070baf2e742a0b9c5311e1473d695e47cb84dee17ecdcec99e3dcd (br-a816c8cca207): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:07:33.426915    2444 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a816c8cca2070baf2e742a0b9c5311e1473d695e47cb84dee17ecdcec99e3dcd (br-a816c8cca207): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a816c8cca2070baf2e742a0b9c5311e1473d695e47cb84dee17ecdcec99e3dcd (br-a816c8cca207): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:07:33.440248    2444 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:07:34.483283    2444 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.043023s)
	I0601 11:07:34.489931    2444 cli_runner.go:164] Run: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:07:35.513009    2444 cli_runner.go:211] docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:07:35.513085    2444 cli_runner.go:217] Completed: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0229058s)
	I0601 11:07:35.513113    2444 client.go:171] LocalClient.Create took 6.2806831s
	I0601 11:07:37.535447    2444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:07:37.541469    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:07:38.573088    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:38.573224    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0314578s)
	I0601 11:07:38.573224    2444 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:38.863953    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:07:39.912190    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:39.912252    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0481071s)
	W0601 11:07:39.912384    2444 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:07:39.912434    2444 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:39.923265    2444 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:07:39.928610    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:07:40.973383    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:40.973505    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0446421s)
	I0601 11:07:40.973682    2444 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:41.189485    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:07:42.272670    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:42.272840    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0830723s)
	W0601 11:07:42.272959    2444 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:07:42.273067    2444 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:42.273102    2444 start.go:134] duration metric: createHost completed in 13.0445904s
	I0601 11:07:42.282989    2444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:07:42.289034    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:07:43.333161    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:43.333161    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0441151s)
	I0601 11:07:43.333161    2444 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:43.668421    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:07:44.721675    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:44.721675    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0531602s)
	W0601 11:07:44.721842    2444 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:07:44.721939    2444 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:44.731632    2444 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:07:44.737243    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:07:45.805234    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:45.805234    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0679781s)
	I0601 11:07:45.805234    2444 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:46.155358    2444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404
	W0601 11:07:47.196035    2444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404 returned with exit code 1
	I0601 11:07:47.196132    2444 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: (1.0404501s)
	W0601 11:07:47.196477    2444 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	W0601 11:07:47.196571    2444 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220601110036-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601110036-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	I0601 11:07:47.196641    2444 fix.go:57] fixHost completed within 46.0576274s
	I0601 11:07:47.196689    2444 start.go:81] releasing machines lock for "multinode-20220601110036-9404", held for 46.0577839s
	W0601 11:07:47.196938    2444 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-20220601110036-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220601110036-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	I0601 11:07:47.203890    2444 out.go:177] 
	W0601 11:07:47.207340    2444 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404 container: docker volume create multinode-20220601110036-9404 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404: read-only file system
	
	W0601 11:07:47.207874    2444 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:07:47.208126    2444 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:07:47.210370    2444 out.go:177] 

** /stderr **
multinode_test.go:354: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404 --wait=true -v=8 --alsologtostderr --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.1041213s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.7650519s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:07:51.254976    5752 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (115.17s)
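The network failure in the run above is Docker refusing `docker network create --subnet=192.168.58.0/24 ...` because that range conflicts with an existing bridge ("networks have overlapping IPv4"). The overlap condition being reported can be sketched with Python's stdlib `ipaddress` module; this is an illustration of the check, not minikube's or Docker's actual code, and the second subnet here is a hypothetical stand-in for the pre-existing bridge:

```python
import ipaddress

# Subnet minikube tried to reserve for the cluster network (from the log)
requested = ipaddress.ip_network("192.168.58.0/24")

# Hypothetical existing Docker bridge occupying the same range
existing = ipaddress.ip_network("192.168.58.0/24")

# Docker rejects the create when the new range overlaps any existing network
print(requested.overlaps(existing))   # True -> "networks have overlapping IPv4"

# A disjoint range would not trigger the error
print(requested.overlaps(ipaddress.ip_network("192.168.49.0/24")))  # False
```

This also matches the earlier log lines where minikube skips 192.168.49.0/24 as reserved before attempting 192.168.58.0/24.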

TestMultiNode/serial/ValidateNameConflict (164.32s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220601110036-9404
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404-m01 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404-m01 --driver=docker: exit status 60 (1m14.6445537s)

-- stdout --
	* [multinode-20220601110036-9404-m01] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node multinode-20220601110036-9404-m01 in cluster multinode-20220601110036-9404-m01
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "multinode-20220601110036-9404-m01" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	E0601 11:08:06.330881    8200 network_create.go:104] error while trying to create docker network multinode-20220601110036-9404-m01 192.168.49.0/24: create docker network multinode-20220601110036-9404-m01 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0d421b8a7989852b7024093fe7a438758a9b84829c71f587e1d2297c71a30910 (br-0d421b8a7989): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404-m01 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0d421b8a7989852b7024093fe7a438758a9b84829c71f587e1d2297c71a30910 (br-0d421b8a7989): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404-m01 container: docker volume create multinode-20220601110036-9404-m01 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404-m01 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404-m01: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404-m01': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404-m01: read-only file system
	
	E0601 11:08:52.626998    8200 network_create.go:104] error while trying to create docker network multinode-20220601110036-9404-m01 192.168.58.0/24: create docker network multinode-20220601110036-9404-m01 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4ee3d702431d8d76f462062941f2dfd0088e46bffddf49b6e0eeeb16e8b7225a (br-4ee3d702431d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404-m01 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4ee3d702431d8d76f462062941f2dfd0088e46bffddf49b6e0eeeb16e8b7225a (br-4ee3d702431d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220601110036-9404-m01" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404-m01 container: docker volume create multinode-20220601110036-9404-m01 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404-m01 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404-m01: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404-m01': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404-m01: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404-m01 container: docker volume create multinode-20220601110036-9404-m01 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404-m01 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404-m01: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404-m01': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404-m01: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404-m02 --driver=docker
multinode_test.go:458: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404-m02 --driver=docker: exit status 60 (1m14.3254081s)

-- stdout --
	* [multinode-20220601110036-9404-m02] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node multinode-20220601110036-9404-m02 in cluster multinode-20220601110036-9404-m02
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "multinode-20220601110036-9404-m02" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	E0601 11:09:20.604491    7420 network_create.go:104] error while trying to create docker network multinode-20220601110036-9404-m02 192.168.49.0/24: create docker network multinode-20220601110036-9404-m02 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 17c4eade7aa6cf95d25037c5f9ee7fbdb9f5adfbce8d55f039f0d234f1ad12e4 (br-17c4eade7aa6): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404-m02 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 17c4eade7aa6cf95d25037c5f9ee7fbdb9f5adfbce8d55f039f0d234f1ad12e4 (br-17c4eade7aa6): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404-m02 container: docker volume create multinode-20220601110036-9404-m02 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404-m02 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404-m02: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404-m02': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404-m02: read-only file system
	
	E0601 11:10:07.114214    7420 network_create.go:104] error while trying to create docker network multinode-20220601110036-9404-m02 192.168.58.0/24: create docker network multinode-20220601110036-9404-m02 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 25abffca38190591cb11eb248f3a8d526c1f37d427b14b849a188f26774e9f2a (br-25abffca3819): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220601110036-9404-m02 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220601110036-9404-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 25abffca38190591cb11eb248f3a8d526c1f37d427b14b849a188f26774e9f2a (br-25abffca3819): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220601110036-9404-m02" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404-m02 container: docker volume create multinode-20220601110036-9404-m02 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404-m02 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404-m02: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404-m02': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404-m02: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220601110036-9404-m02 container: docker volume create multinode-20220601110036-9404-m02 --label name.minikube.sigs.k8s.io=multinode-20220601110036-9404-m02 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220601110036-9404-m02: error while creating volume root path '/var/lib/docker/volumes/multinode-20220601110036-9404-m02': mkdir /var/lib/docker/volumes/multinode-20220601110036-9404-m02: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
multinode_test.go:460: failed to start profile. args "out/minikube-windows-amd64.exe start -p multinode-20220601110036-9404-m02 --driver=docker" : exit status 60
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220601110036-9404
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220601110036-9404: exit status 80 (3.0887779s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_24.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20220601110036-9404-m02
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20220601110036-9404-m02: (7.9933041s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220601110036-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220601110036-9404: exit status 1 (1.1215109s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220601110036-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220601110036-9404 -n multinode-20220601110036-9404: exit status 7 (2.7982975s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:10:35.577038    7928 status.go:247] status error: host: state: unknown state "multinode-20220601110036-9404": docker container inspect multinode-20220601110036-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220601110036-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220601110036-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (164.32s)

TestPreload (86.98s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220601111047-9404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
preload_test.go:48: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p test-preload-20220601111047-9404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: exit status 60 (1m14.896793s)

-- stdout --
	* [test-preload-20220601111047-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node test-preload-20220601111047-9404 in cluster test-preload-20220601111047-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "test-preload-20220601111047-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:10:47.873977    1960 out.go:296] Setting OutFile to fd 696 ...
	I0601 11:10:47.926012    1960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:10:47.926012    1960 out.go:309] Setting ErrFile to fd 760...
	I0601 11:10:47.926012    1960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:10:47.939676    1960 out.go:303] Setting JSON to false
	I0601 11:10:47.942456    1960 start.go:115] hostinfo: {"hostname":"minikube2","uptime":13783,"bootTime":1654068064,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:10:47.942980    1960 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:10:47.948963    1960 out.go:177] * [test-preload-20220601111047-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:10:47.952502    1960 notify.go:193] Checking for updates...
	I0601 11:10:47.956946    1960 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:10:47.959691    1960 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:10:47.961800    1960 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:10:47.964304    1960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:10:47.967581    1960 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:10:47.967581    1960 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:10:50.476251    1960 docker.go:137] docker version: linux-20.10.14
	I0601 11:10:50.484131    1960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:10:52.552299    1960 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.068145s)
	I0601 11:10:52.552909    1960 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:10:51.488106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:10:52.559517    1960 out.go:177] * Using the docker driver based on user configuration
	I0601 11:10:52.561671    1960 start.go:284] selected driver: docker
	I0601 11:10:52.561671    1960 start.go:806] validating driver "docker" against <nil>
	I0601 11:10:52.561671    1960 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:10:52.688775    1960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:10:54.730689    1960 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0418912s)
	I0601 11:10:54.730689    1960 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:10:53.7188753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:10:54.730689    1960 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:10:54.731925    1960 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:10:54.734999    1960 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:10:54.737237    1960 cni.go:95] Creating CNI manager for ""
	I0601 11:10:54.737237    1960 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:10:54.737237    1960 start_flags.go:306] config:
	{Name:test-preload-20220601111047-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220601111047-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:10:54.740665    1960 out.go:177] * Starting control plane node test-preload-20220601111047-9404 in cluster test-preload-20220601111047-9404
	I0601 11:10:54.743355    1960 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:10:54.746032    1960 out.go:177] * Pulling base image ...
	I0601 11:10:54.748738    1960 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0601 11:10:54.748738    1960 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:10:54.749634    1960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0601 11:10:54.749634    1960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0
	I0601 11:10:54.749634    1960 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\test-preload-20220601111047-9404\config.json ...
	I0601 11:10:54.749634    1960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0
	I0601 11:10:54.749634    1960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1
	I0601 11:10:54.749634    1960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns:1.6.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5
	I0601 11:10:54.749634    1960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0
	I0601 11:10:54.749634    1960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0
	I0601 11:10:54.749634    1960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\test-preload-20220601111047-9404\config.json: {Name:mk2a0ecc3f01ec44271707cf055502e2ff4a3ad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:10:54.749634    1960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0
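The `windows sanitize` lines above rewrite cached image references into valid Windows file names: NTFS forbids `:` in path components, so the tag and digest colons become underscores while the drive-letter colon survives. A minimal sketch of that idea (an illustrative reconstruction, not minikube's actual localpath.go code):

```go
package main

import (
	"fmt"
	"strings"
)

// windowsSanitize replaces every ':' after the drive letter with '_',
// so an image reference like "pause:3.1" can be stored as a file.
// Illustrative only; minikube's real implementation lives in localpath.go.
func windowsSanitize(p string) string {
	if len(p) > 1 && p[1] == ':' {
		// keep the "C:" drive prefix, sanitize the rest
		return p[:2] + strings.ReplaceAll(p[2:], ":", "_")
	}
	return strings.ReplaceAll(p, ":", "_")
}

func main() {
	fmt.Println(windowsSanitize(`C:\cache\k8s.gcr.io\pause:3.1`))
	// C:\cache\k8s.gcr.io\pause_3.1
}
```

The same rule explains the kicbase tarball name later in the log, where both the `:v0.0.31…` tag colon and the `sha256:` digest colon are rewritten.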
	I0601 11:10:54.915328    1960 cache.go:107] acquiring lock: {Name:mk2bed4c2f349144087ca9b4676d08589a5f3b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:10:54.916144    1960 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0601 11:10:54.918680    1960 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:10:54.918525    1960 cache.go:107] acquiring lock: {Name:mkb269f15b2e3b2569308dbf84de26df267b2fcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:10:54.919058    1960 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0601 11:10:54.919216    1960 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 169.5057ms
	I0601 11:10:54.919216    1960 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0601 11:10:54.919480    1960 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0601 11:10:54.920404    1960 cache.go:107] acquiring lock: {Name:mkfe379c4c474168d5a5fd2dde0e9bf1347e993b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:10:54.920860    1960 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0601 11:10:54.925773    1960 cache.go:107] acquiring lock: {Name:mkef9a3d9e3cbb1fe114c12bec441ddb11fca0c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:10:54.925773    1960 cache.go:107] acquiring lock: {Name:mkef49659bc6e08b20a8521eb6ce4fb712ad39c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:10:54.926073    1960 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0601 11:10:54.926073    1960 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0601 11:10:54.926607    1960 cache.go:107] acquiring lock: {Name:mk7af4d324ae5378e4084d0d909beff30d29e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:10:54.927215    1960 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0601 11:10:54.933928    1960 cache.go:107] acquiring lock: {Name:mk965b06109155c0e187b8b69e2b0548d9bccb3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:10:54.933928    1960 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0601 11:10:54.935484    1960 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error response from daemon: reference does not exist
	I0601 11:10:54.939477    1960 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
	I0601 11:10:54.941501    1960 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0601 11:10:54.958505    1960 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error response from daemon: reference does not exist
	I0601 11:10:54.973487    1960 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error response from daemon: reference does not exist
	I0601 11:10:54.990474    1960 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
	I0601 11:10:55.006490    1960 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error response from daemon: reference does not exist
	W0601 11:10:55.177615    1960 image.go:190] authn lookup for k8s.gcr.io/kube-controller-manager:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0601 11:10:55.419677    1960 image.go:190] authn lookup for k8s.gcr.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0601 11:10:55.420209    1960 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0
	I0601 11:10:55.604025    1960 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1
	I0601 11:10:55.666983    1960 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1 exists
	I0601 11:10:55.668036    1960 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.1" took 918.3918ms
	I0601 11:10:55.668036    1960 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1 succeeded
	W0601 11:10:55.678603    1960 image.go:190] authn lookup for k8s.gcr.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0601 11:10:55.846462    1960 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0
	I0601 11:10:55.919709    1960 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:10:55.919709    1960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:10:55.919709    1960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:10:55.919709    1960 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:10:55.919709    1960 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:10:55.919709    1960 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:10:55.919709    1960 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:10:55.919709    1960 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:10:55.919709    1960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	W0601 11:10:55.953711    1960 image.go:190] authn lookup for k8s.gcr.io/kube-proxy:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0601 11:10:55.982853    1960 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0 exists
	I0601 11:10:55.982853    1960 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.17.0" took 1.2332049s
	I0601 11:10:55.983385    1960 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0 succeeded
	I0601 11:10:56.116060    1960 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0
	W0601 11:10:56.187677    1960 image.go:190] authn lookup for k8s.gcr.io/kube-scheduler:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0601 11:10:56.406554    1960 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0
	W0601 11:10:56.454157    1960 image.go:190] authn lookup for k8s.gcr.io/coredns:1.6.5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0601 11:10:56.688566    1960 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5
	W0601 11:10:56.736351    1960 image.go:190] authn lookup for k8s.gcr.io/kube-apiserver:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0601 11:10:57.108536    1960 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0
	I0601 11:10:57.120535    1960 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5 exists
	I0601 11:10:57.121440    1960 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns_1.6.5" took 2.3717799s
	I0601 11:10:57.121440    1960 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5 succeeded
	I0601 11:10:57.133704    1960 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0 exists
	I0601 11:10:57.134719    1960 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.17.0" took 2.3840433s
	I0601 11:10:57.134775    1960 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0 succeeded
	I0601 11:10:57.504340    1960 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0 exists
	I0601 11:10:57.504715    1960 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.17.0" took 2.7549994s
	I0601 11:10:57.504715    1960 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0 succeeded
	I0601 11:10:57.587073    1960 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 exists
	I0601 11:10:57.587073    1960 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.4.3-0" took 2.8374069s
	I0601 11:10:57.587073    1960 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 succeeded
	I0601 11:10:57.880112    1960 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0 exists
	I0601 11:10:57.881115    1960 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.17.0" took 3.1314129s
	I0601 11:10:57.881115    1960 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0 succeeded
	I0601 11:10:57.881115    1960 cache.go:87] Successfully saved all images to host disk.
	I0601 11:10:58.343463    1960 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:10:58.343463    1960 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:10:58.343463    1960 start.go:352] acquiring machines lock for test-preload-20220601111047-9404: {Name:mk962ba44e9d260d3cfb87a3a0b8db3599721d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:10:58.344112    1960 start.go:356] acquired machines lock for "test-preload-20220601111047-9404" in 648.5µs
	I0601 11:10:58.344240    1960 start.go:91] Provisioning new machine with config: &{Name:test-preload-20220601111047-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220601111047-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:10:58.344240    1960 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:10:58.348165    1960 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:10:58.348811    1960 start.go:165] libmachine.API.Create for "test-preload-20220601111047-9404" (driver="docker")
	I0601 11:10:58.348868    1960 client.go:168] LocalClient.Create starting
	I0601 11:10:58.349474    1960 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:10:58.349474    1960 main.go:134] libmachine: Decoding PEM data...
	I0601 11:10:58.349474    1960 main.go:134] libmachine: Parsing certificate...
	I0601 11:10:58.349474    1960 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:10:58.350094    1960 main.go:134] libmachine: Decoding PEM data...
	I0601 11:10:58.350259    1960 main.go:134] libmachine: Parsing certificate...
	I0601 11:10:58.359159    1960 cli_runner.go:164] Run: docker network inspect test-preload-20220601111047-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:10:59.415093    1960 cli_runner.go:211] docker network inspect test-preload-20220601111047-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:10:59.415164    1960 cli_runner.go:217] Completed: docker network inspect test-preload-20220601111047-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0555412s)
	I0601 11:10:59.423837    1960 network_create.go:272] running [docker network inspect test-preload-20220601111047-9404] to gather additional debugging logs...
	I0601 11:10:59.423902    1960 cli_runner.go:164] Run: docker network inspect test-preload-20220601111047-9404
	W0601 11:11:00.445142    1960 cli_runner.go:211] docker network inspect test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:00.445396    1960 cli_runner.go:217] Completed: docker network inspect test-preload-20220601111047-9404: (1.0212285s)
	I0601 11:11:00.445467    1960 network_create.go:275] error running [docker network inspect test-preload-20220601111047-9404]: docker network inspect test-preload-20220601111047-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220601111047-9404
	I0601 11:11:00.445467    1960 network_create.go:277] output of [docker network inspect test-preload-20220601111047-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220601111047-9404
	
	** /stderr **
	I0601 11:11:00.452617    1960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:11:01.505049    1960 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0524203s)
	I0601 11:11:01.526857    1960 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0004122d0] misses:0}
	I0601 11:11:01.526857    1960 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:11:01.526857    1960 network_create.go:115] attempt to create docker network test-preload-20220601111047-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:11:01.533771    1960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601111047-9404
	W0601 11:11:02.616155    1960 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:02.616155    1960 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601111047-9404: (1.0823714s)
	E0601 11:11:02.616155    1960 network_create.go:104] error while trying to create docker network test-preload-20220601111047-9404 192.168.49.0/24: create docker network test-preload-20220601111047-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601111047-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0c81324dd66e16355cb06304f592234718f4100f9497f296277c67af02f35f3b (br-0c81324dd66e): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:11:02.616155    1960 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220601111047-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601111047-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0c81324dd66e16355cb06304f592234718f4100f9497f296277c67af02f35f3b (br-0c81324dd66e): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220601111047-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601111047-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0c81324dd66e16355cb06304f592234718f4100f9497f296277c67af02f35f3b (br-0c81324dd66e): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
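The `docker network create` above fails because the requested 192.168.49.0/24 collides with a stale bridge network left over from an earlier run: Docker refuses any new network whose IPv4 range overlaps an existing one. The overlap condition can be sketched in Go with the standard library (an illustrative check, not minikube's network_create.go logic):

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two CIDR ranges share any addresses —
// the condition the daemon rejects with "networks have overlapping IPv4".
// Illustrative sketch only.
func cidrsOverlap(a, b string) (bool, error) {
	_, netA, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, netB, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	// Two ranges overlap iff either contains the other's network address.
	return netA.Contains(netB.IP) || netB.Contains(netA.IP), nil
}

func main() {
	ov, _ := cidrsOverlap("192.168.49.0/24", "192.168.49.0/24")
	fmt.Println(ov) // true — a leftover bridge already owns minikube's default subnet
}
```

To identify the offending leftover on a host like this one, `docker network inspect` with a `--format` template can print each network's name and subnet so the conflicting bridge can be removed before retrying.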
	I0601 11:11:02.631095    1960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:11:03.691426    1960 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0600231s)
	I0601 11:11:03.698893    1960 cli_runner.go:164] Run: docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:11:04.743444    1960 cli_runner.go:211] docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:11:04.743639    1960 cli_runner.go:217] Completed: docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0445391s)
	I0601 11:11:04.743746    1960 client.go:171] LocalClient.Create took 6.3948053s
	I0601 11:11:06.769882    1960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:11:06.775725    1960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404
	W0601 11:11:07.806692    1960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:07.806692    1960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: (1.0309557s)
	I0601 11:11:07.806692    1960 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:08.098570    1960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404
	W0601 11:11:09.156706    1960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:09.156872    1960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: (1.0581248s)
	W0601 11:11:09.157148    1960 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	
	W0601 11:11:09.157148    1960 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:09.168452    1960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:11:09.176427    1960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404
	W0601 11:11:10.215457    1960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:10.215644    1960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: (1.0388433s)
	I0601 11:11:10.215800    1960 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:10.518132    1960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404
	W0601 11:11:11.526999    1960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:11.526999    1960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: (1.0088559s)
	W0601 11:11:11.526999    1960 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	
	W0601 11:11:11.526999    1960 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:11.526999    1960 start.go:134] duration metric: createHost completed in 13.1826098s
	I0601 11:11:11.526999    1960 start.go:81] releasing machines lock for "test-preload-20220601111047-9404", held for 13.1826436s
	W0601 11:11:11.527733    1960 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for test-preload-20220601111047-9404 container: docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220601111047-9404: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220601111047-9404': mkdir /var/lib/docker/volumes/test-preload-20220601111047-9404: read-only file system
	I0601 11:11:11.546474    1960 cli_runner.go:164] Run: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}
	W0601 11:11:12.566178    1960 cli_runner.go:211] docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:11:12.566376    1960 cli_runner.go:217] Completed: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: (1.0195137s)
	I0601 11:11:12.566576    1960 delete.go:82] Unable to get host status for test-preload-20220601111047-9404, assuming it has already been deleted: state: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	W0601 11:11:12.566698    1960 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for test-preload-20220601111047-9404 container: docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220601111047-9404: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220601111047-9404': mkdir /var/lib/docker/volumes/test-preload-20220601111047-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for test-preload-20220601111047-9404 container: docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220601111047-9404: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220601111047-9404': mkdir /var/lib/docker/volumes/test-preload-20220601111047-9404: read-only file system
	
	I0601 11:11:12.566698    1960 start.go:614] Will try again in 5 seconds ...
	I0601 11:11:17.580913    1960 start.go:352] acquiring machines lock for test-preload-20220601111047-9404: {Name:mk962ba44e9d260d3cfb87a3a0b8db3599721d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:11:17.581385    1960 start.go:356] acquired machines lock for "test-preload-20220601111047-9404" in 274µs
	I0601 11:11:17.581385    1960 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:11:17.581385    1960 fix.go:55] fixHost starting: 
	I0601 11:11:17.595448    1960 cli_runner.go:164] Run: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}
	W0601 11:11:18.649638    1960 cli_runner.go:211] docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:11:18.649638    1960 cli_runner.go:217] Completed: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: (1.0541786s)
	I0601 11:11:18.649638    1960 fix.go:103] recreateIfNeeded on test-preload-20220601111047-9404: state= err=unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:18.649638    1960 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:11:18.672619    1960 out.go:177] * docker "test-preload-20220601111047-9404" container is missing, will recreate.
	I0601 11:11:18.674651    1960 delete.go:124] DEMOLISHING test-preload-20220601111047-9404 ...
	I0601 11:11:18.688617    1960 cli_runner.go:164] Run: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}
	W0601 11:11:19.731102    1960 cli_runner.go:211] docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:11:19.731130    1960 cli_runner.go:217] Completed: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: (1.0422351s)
	W0601 11:11:19.731253    1960 stop.go:75] unable to get state: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:19.731286    1960 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:19.744639    1960 cli_runner.go:164] Run: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}
	W0601 11:11:20.774361    1960 cli_runner.go:211] docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:11:20.774361    1960 cli_runner.go:217] Completed: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: (1.0297105s)
	I0601 11:11:20.774361    1960 delete.go:82] Unable to get host status for test-preload-20220601111047-9404, assuming it has already been deleted: state: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:20.782840    1960 cli_runner.go:164] Run: docker container inspect -f {{.Id}} test-preload-20220601111047-9404
	W0601 11:11:21.805134    1960 cli_runner.go:211] docker container inspect -f {{.Id}} test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:21.805190    1960 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} test-preload-20220601111047-9404: (1.0221734s)
	I0601 11:11:21.805190    1960 kic.go:356] could not find the container test-preload-20220601111047-9404 to remove it. will try anyways
	I0601 11:11:21.811862    1960 cli_runner.go:164] Run: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}
	W0601 11:11:22.850801    1960 cli_runner.go:211] docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:11:22.850801    1960 cli_runner.go:217] Completed: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: (1.0386349s)
	W0601 11:11:22.850969    1960 oci.go:84] error getting container status, will try to delete anyways: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:22.857920    1960 cli_runner.go:164] Run: docker exec --privileged -t test-preload-20220601111047-9404 /bin/bash -c "sudo init 0"
	W0601 11:11:23.877493    1960 cli_runner.go:211] docker exec --privileged -t test-preload-20220601111047-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:11:23.877816    1960 cli_runner.go:217] Completed: docker exec --privileged -t test-preload-20220601111047-9404 /bin/bash -c "sudo init 0": (1.0195609s)
	I0601 11:11:23.877878    1960 oci.go:625] error shutdown test-preload-20220601111047-9404: docker exec --privileged -t test-preload-20220601111047-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:24.891793    1960 cli_runner.go:164] Run: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}
	W0601 11:11:25.907878    1960 cli_runner.go:211] docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:11:25.907878    1960 cli_runner.go:217] Completed: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: (1.0157662s)
	I0601 11:11:25.908018    1960 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:25.908018    1960 oci.go:639] temporary error: container test-preload-20220601111047-9404 status is  but expect it to be exited
	I0601 11:11:25.908018    1960 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:26.384440    1960 cli_runner.go:164] Run: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}
	W0601 11:11:27.406295    1960 cli_runner.go:211] docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:11:27.406295    1960 cli_runner.go:217] Completed: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: (1.0215805s)
	I0601 11:11:27.406472    1960 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:27.406472    1960 oci.go:639] temporary error: container test-preload-20220601111047-9404 status is  but expect it to be exited
	I0601 11:11:27.406564    1960 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:28.314976    1960 cli_runner.go:164] Run: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}
	W0601 11:11:29.335648    1960 cli_runner.go:211] docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:11:29.335668    1960 cli_runner.go:217] Completed: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: (1.0194143s)
	I0601 11:11:29.335750    1960 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:29.335750    1960 oci.go:639] temporary error: container test-preload-20220601111047-9404 status is  but expect it to be exited
	I0601 11:11:29.335750    1960 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:29.994388    1960 cli_runner.go:164] Run: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}
	W0601 11:11:31.027546    1960 cli_runner.go:211] docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:11:31.027546    1960 cli_runner.go:217] Completed: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: (1.0331461s)
	I0601 11:11:31.027546    1960 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:31.027546    1960 oci.go:639] temporary error: container test-preload-20220601111047-9404 status is  but expect it to be exited
	I0601 11:11:31.027546    1960 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:32.151885    1960 cli_runner.go:164] Run: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}
	W0601 11:11:33.183528    1960 cli_runner.go:211] docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:11:33.183528    1960 cli_runner.go:217] Completed: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: (1.0316317s)
	I0601 11:11:33.183528    1960 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:33.183528    1960 oci.go:639] temporary error: container test-preload-20220601111047-9404 status is  but expect it to be exited
	I0601 11:11:33.183528    1960 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:34.715620    1960 cli_runner.go:164] Run: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}
	W0601 11:11:35.737167    1960 cli_runner.go:211] docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:11:35.737167    1960 cli_runner.go:217] Completed: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: (1.0215361s)
	I0601 11:11:35.737167    1960 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:35.737167    1960 oci.go:639] temporary error: container test-preload-20220601111047-9404 status is  but expect it to be exited
	I0601 11:11:35.737167    1960 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:38.785922    1960 cli_runner.go:164] Run: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}
	W0601 11:11:39.812175    1960 cli_runner.go:211] docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:11:39.812175    1960 cli_runner.go:217] Completed: docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: (1.0262414s)
	I0601 11:11:39.812175    1960 oci.go:637] temporary error verifying shutdown: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:39.812175    1960 oci.go:639] temporary error: container test-preload-20220601111047-9404 status is  but expect it to be exited
	I0601 11:11:39.812175    1960 oci.go:88] couldn't shut down test-preload-20220601111047-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	 
	I0601 11:11:39.819817    1960 cli_runner.go:164] Run: docker rm -f -v test-preload-20220601111047-9404
	I0601 11:11:40.855257    1960 cli_runner.go:217] Completed: docker rm -f -v test-preload-20220601111047-9404: (1.035224s)
	I0601 11:11:40.862285    1960 cli_runner.go:164] Run: docker container inspect -f {{.Id}} test-preload-20220601111047-9404
	W0601 11:11:41.875060    1960 cli_runner.go:211] docker container inspect -f {{.Id}} test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:41.875060    1960 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} test-preload-20220601111047-9404: (1.0127632s)
	I0601 11:11:41.882587    1960 cli_runner.go:164] Run: docker network inspect test-preload-20220601111047-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:11:42.879847    1960 cli_runner.go:211] docker network inspect test-preload-20220601111047-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:11:42.888734    1960 network_create.go:272] running [docker network inspect test-preload-20220601111047-9404] to gather additional debugging logs...
	I0601 11:11:42.888734    1960 cli_runner.go:164] Run: docker network inspect test-preload-20220601111047-9404
	W0601 11:11:43.909793    1960 cli_runner.go:211] docker network inspect test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:43.909880    1960 cli_runner.go:217] Completed: docker network inspect test-preload-20220601111047-9404: (1.021047s)
	I0601 11:11:43.909880    1960 network_create.go:275] error running [docker network inspect test-preload-20220601111047-9404]: docker network inspect test-preload-20220601111047-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220601111047-9404
	I0601 11:11:43.909880    1960 network_create.go:277] output of [docker network inspect test-preload-20220601111047-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220601111047-9404
	
	** /stderr **
	W0601 11:11:43.912529    1960 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:11:43.912529    1960 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:11:44.917902    1960 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:11:44.926842    1960 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:11:44.927444    1960 start.go:165] libmachine.API.Create for "test-preload-20220601111047-9404" (driver="docker")
	I0601 11:11:44.927444    1960 client.go:168] LocalClient.Create starting
	I0601 11:11:44.927554    1960 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:11:44.928086    1960 main.go:134] libmachine: Decoding PEM data...
	I0601 11:11:44.928276    1960 main.go:134] libmachine: Parsing certificate...
	I0601 11:11:44.928338    1960 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:11:44.928338    1960 main.go:134] libmachine: Decoding PEM data...
	I0601 11:11:44.928338    1960 main.go:134] libmachine: Parsing certificate...
	I0601 11:11:44.937086    1960 cli_runner.go:164] Run: docker network inspect test-preload-20220601111047-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:11:45.949523    1960 cli_runner.go:211] docker network inspect test-preload-20220601111047-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:11:45.949523    1960 cli_runner.go:217] Completed: docker network inspect test-preload-20220601111047-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0124257s)
	I0601 11:11:45.957607    1960 network_create.go:272] running [docker network inspect test-preload-20220601111047-9404] to gather additional debugging logs...
	I0601 11:11:45.957607    1960 cli_runner.go:164] Run: docker network inspect test-preload-20220601111047-9404
	W0601 11:11:46.966090    1960 cli_runner.go:211] docker network inspect test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:46.966090    1960 cli_runner.go:217] Completed: docker network inspect test-preload-20220601111047-9404: (1.0084721s)
	I0601 11:11:46.966090    1960 network_create.go:275] error running [docker network inspect test-preload-20220601111047-9404]: docker network inspect test-preload-20220601111047-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220601111047-9404
	I0601 11:11:46.966090    1960 network_create.go:277] output of [docker network inspect test-preload-20220601111047-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220601111047-9404
	
	** /stderr **
	I0601 11:11:46.974080    1960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:11:48.002137    1960 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0278763s)
	I0601 11:11:48.018756    1960 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004122d0] amended:false}} dirty:map[] misses:0}
	I0601 11:11:48.018756    1960 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:11:48.036026    1960 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004122d0] amended:true}} dirty:map[192.168.49.0:0xc0004122d0 192.168.58.0:0xc0004126d8] misses:0}
	I0601 11:11:48.036026    1960 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:11:48.036026    1960 network_create.go:115] attempt to create docker network test-preload-20220601111047-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:11:48.043909    1960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601111047-9404
	W0601 11:11:49.032739    1960 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601111047-9404 returned with exit code 1
	E0601 11:11:49.032739    1960 network_create.go:104] error while trying to create docker network test-preload-20220601111047-9404 192.168.58.0/24: create docker network test-preload-20220601111047-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601111047-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 578425071b7aa7812b40a0a63e44fb535edac6280b3352d23e6505e7aa556318 (br-578425071b7a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:11:49.032739    1960 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220601111047-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601111047-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 578425071b7aa7812b40a0a63e44fb535edac6280b3352d23e6505e7aa556318 (br-578425071b7a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220601111047-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601111047-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 578425071b7aa7812b40a0a63e44fb535edac6280b3352d23e6505e7aa556318 (br-578425071b7a): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:11:49.047728    1960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:11:50.091341    1960 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0435476s)
	I0601 11:11:50.099092    1960 cli_runner.go:164] Run: docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:11:51.143358    1960 cli_runner.go:211] docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:11:51.143400    1960 cli_runner.go:217] Completed: docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true: (1.04414s)
	I0601 11:11:51.143649    1960 client.go:171] LocalClient.Create took 6.2161356s
	I0601 11:11:53.160194    1960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:11:53.165916    1960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404
	W0601 11:11:54.189478    1960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:54.189478    1960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: (1.0235505s)
	I0601 11:11:54.189478    1960 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:54.530287    1960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404
	W0601 11:11:55.586756    1960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:55.586970    1960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: (1.056208s)
	W0601 11:11:55.586970    1960 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	
	W0601 11:11:55.586970    1960 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:55.596671    1960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:11:55.603541    1960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404
	W0601 11:11:56.632853    1960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:56.632925    1960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: (1.0290477s)
	I0601 11:11:56.633098    1960 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:56.868902    1960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404
	W0601 11:11:57.872008    1960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:57.872008    1960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: (1.0030947s)
	W0601 11:11:57.872008    1960 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	
	W0601 11:11:57.872008    1960 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:57.872008    1960 start.go:134] duration metric: createHost completed in 12.9539616s
	I0601 11:11:57.883482    1960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:11:57.889176    1960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404
	W0601 11:11:58.918486    1960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:11:58.918486    1960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: (1.0292984s)
	I0601 11:11:58.918486    1960 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:11:59.169297    1960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404
	W0601 11:12:00.198946    1960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:12:00.198946    1960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: (1.0296377s)
	W0601 11:12:00.198946    1960 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	
	W0601 11:12:00.198946    1960 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:12:00.210556    1960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:12:00.217118    1960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404
	W0601 11:12:01.238476    1960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:12:01.238476    1960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: (1.0211851s)
	I0601 11:12:01.238815    1960 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:12:01.449518    1960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404
	W0601 11:12:02.503769    1960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404 returned with exit code 1
	I0601 11:12:02.503769    1960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: (1.0539351s)
	W0601 11:12:02.504113    1960 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	
	W0601 11:12:02.504184    1960 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220601111047-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601111047-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404
	I0601 11:12:02.504184    1960 fix.go:57] fixHost completed within 44.9222947s
	I0601 11:12:02.504184    1960 start.go:81] releasing machines lock for "test-preload-20220601111047-9404", held for 44.9222947s
	W0601 11:12:02.504535    1960 out.go:239] * Failed to start docker container. Running "minikube delete -p test-preload-20220601111047-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220601111047-9404 container: docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220601111047-9404: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220601111047-9404': mkdir /var/lib/docker/volumes/test-preload-20220601111047-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p test-preload-20220601111047-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220601111047-9404 container: docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220601111047-9404: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220601111047-9404': mkdir /var/lib/docker/volumes/test-preload-20220601111047-9404: read-only file system
	
	I0601 11:12:02.510152    1960 out.go:177] 
	W0601 11:12:02.515076    1960 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220601111047-9404 container: docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220601111047-9404: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220601111047-9404': mkdir /var/lib/docker/volumes/test-preload-20220601111047-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220601111047-9404 container: docker volume create test-preload-20220601111047-9404 --label name.minikube.sigs.k8s.io=test-preload-20220601111047-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220601111047-9404: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220601111047-9404': mkdir /var/lib/docker/volumes/test-preload-20220601111047-9404: read-only file system
	
	W0601 11:12:02.515076    1960 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:12:02.515076    1960 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:12:02.518206    1960 out.go:177] 

** /stderr **
preload_test.go:50: out/minikube-windows-amd64.exe start -p test-preload-20220601111047-9404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0 failed: exit status 60
panic.go:482: *** TestPreload FAILED at 2022-06-01 11:12:02.6240674 +0000 GMT m=+2932.890597601
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220601111047-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect test-preload-20220601111047-9404: exit status 1 (1.1286054s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: test-preload-20220601111047-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-20220601111047-9404 -n test-preload-20220601111047-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-20220601111047-9404 -n test-preload-20220601111047-9404: exit status 7 (2.8710598s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:12:06.599677    8052 status.go:247] status error: host: state: unknown state "test-preload-20220601111047-9404": docker container inspect test-preload-20220601111047-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220601111047-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-20220601111047-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "test-preload-20220601111047-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20220601111047-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20220601111047-9404: (7.998937s)
--- FAIL: TestPreload (86.98s)
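The "networks have overlapping IPv4" errors above occur because minikube asks Docker for a fixed subnet (e.g. 192.168.58.0/24) that a stale bridge network already claims. As a minimal illustration of the underlying check, not minikube's actual code, the overlap condition can be reproduced with Python's standard `ipaddress` module:

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True if two CIDR blocks share any addresses (Docker's conflict condition)."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The failing run requested 192.168.58.0/24 while br-50298ec25928 already held it:
print(subnets_overlap("192.168.58.0/24", "192.168.58.0/24"))  # True  -> docker network create fails
print(subnets_overlap("192.168.49.0/24", "192.168.58.0/24"))  # False -> no conflict
```

In practice, listing the existing bridges with `docker network ls` and pruning stale ones with `docker network prune` avoids the conflict; the read-only `/var/lib/docker` error that follows is a separate Docker Desktop fault addressed by the "Restart Docker" suggestion in the log.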

TestScheduledStopWindows (86.11s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20220601111214-9404 --memory=2048 --driver=docker
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p scheduled-stop-20220601111214-9404 --memory=2048 --driver=docker: exit status 60 (1m14.2615119s)

-- stdout --
	* [scheduled-stop-20220601111214-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node scheduled-stop-20220601111214-9404 in cluster scheduled-stop-20220601111214-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20220601111214-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0601 11:12:29.117741   10128 network_create.go:104] error while trying to create docker network scheduled-stop-20220601111214-9404 192.168.49.0/24: create docker network scheduled-stop-20220601111214-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220601111214-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3b5c414e7d750e11483ebf21989bed9e001c8758117998b43b60181c12d117ef (br-3b5c414e7d75): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220601111214-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220601111214-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3b5c414e7d750e11483ebf21989bed9e001c8758117998b43b60181c12d117ef (br-3b5c414e7d75): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220601111214-9404 container: docker volume create scheduled-stop-20220601111214-9404 --label name.minikube.sigs.k8s.io=scheduled-stop-20220601111214-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220601111214-9404: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220601111214-9404': mkdir /var/lib/docker/volumes/scheduled-stop-20220601111214-9404: read-only file system
	
	E0601 11:13:15.405290   10128 network_create.go:104] error while trying to create docker network scheduled-stop-20220601111214-9404 192.168.58.0/24: create docker network scheduled-stop-20220601111214-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220601111214-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d3f26dc100721ff7691d3ac38df0fed3f457761504dbbd2a7495272cf993208f (br-d3f26dc10072): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220601111214-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220601111214-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d3f26dc100721ff7691d3ac38df0fed3f457761504dbbd2a7495272cf993208f (br-d3f26dc10072): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20220601111214-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220601111214-9404 container: docker volume create scheduled-stop-20220601111214-9404 --label name.minikube.sigs.k8s.io=scheduled-stop-20220601111214-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220601111214-9404: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220601111214-9404': mkdir /var/lib/docker/volumes/scheduled-stop-20220601111214-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220601111214-9404 container: docker volume create scheduled-stop-20220601111214-9404 --label name.minikube.sigs.k8s.io=scheduled-stop-20220601111214-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220601111214-9404: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220601111214-9404': mkdir /var/lib/docker/volumes/scheduled-stop-20220601111214-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 60

-- stdout --
	* [scheduled-stop-20220601111214-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node scheduled-stop-20220601111214-9404 in cluster scheduled-stop-20220601111214-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20220601111214-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0601 11:12:29.117741   10128 network_create.go:104] error while trying to create docker network scheduled-stop-20220601111214-9404 192.168.49.0/24: create docker network scheduled-stop-20220601111214-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220601111214-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3b5c414e7d750e11483ebf21989bed9e001c8758117998b43b60181c12d117ef (br-3b5c414e7d75): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220601111214-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220601111214-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3b5c414e7d750e11483ebf21989bed9e001c8758117998b43b60181c12d117ef (br-3b5c414e7d75): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220601111214-9404 container: docker volume create scheduled-stop-20220601111214-9404 --label name.minikube.sigs.k8s.io=scheduled-stop-20220601111214-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220601111214-9404: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220601111214-9404': mkdir /var/lib/docker/volumes/scheduled-stop-20220601111214-9404: read-only file system
	
	E0601 11:13:15.405290   10128 network_create.go:104] error while trying to create docker network scheduled-stop-20220601111214-9404 192.168.58.0/24: create docker network scheduled-stop-20220601111214-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220601111214-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d3f26dc100721ff7691d3ac38df0fed3f457761504dbbd2a7495272cf993208f (br-d3f26dc10072): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220601111214-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true scheduled-stop-20220601111214-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d3f26dc100721ff7691d3ac38df0fed3f457761504dbbd2a7495272cf993208f (br-d3f26dc10072): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20220601111214-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220601111214-9404 container: docker volume create scheduled-stop-20220601111214-9404 --label name.minikube.sigs.k8s.io=scheduled-stop-20220601111214-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220601111214-9404: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220601111214-9404': mkdir /var/lib/docker/volumes/scheduled-stop-20220601111214-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220601111214-9404 container: docker volume create scheduled-stop-20220601111214-9404 --label name.minikube.sigs.k8s.io=scheduled-stop-20220601111214-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220601111214-9404: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220601111214-9404': mkdir /var/lib/docker/volumes/scheduled-stop-20220601111214-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
panic.go:482: *** TestScheduledStopWindows FAILED at 2022-06-01 11:13:28.8952184 +0000 GMT m=+3019.160787801
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopWindows]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-20220601111214-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect scheduled-stop-20220601111214-9404: exit status 1 (1.0953394s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: scheduled-stop-20220601111214-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220601111214-9404 -n scheduled-stop-20220601111214-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220601111214-9404 -n scheduled-stop-20220601111214-9404: exit status 7 (2.820135s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:13:32.787643    1792 status.go:247] status error: host: state: unknown state "scheduled-stop-20220601111214-9404": docker container inspect scheduled-stop-20220601111214-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: scheduled-stop-20220601111214-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-20220601111214-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-20220601111214-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20220601111214-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20220601111214-9404: (7.918929s)
--- FAIL: TestScheduledStopWindows (86.11s)

TestInsufficientStorage (29.35s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20220601111340-9404 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20220601111340-9404 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (18.3654184s)

-- stdout --
	{"specversion":"1.0","id":"75e3b5b7-12cb-4716-acd3-38601d6d3393","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220601111340-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a2a2eb7-a33c-4818-b099-eda560b8d4c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"3e89f0f0-c7ae-4a94-83ff-0d0c77690d7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"03ab7171-b244-413a-8719-56faf856dc81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"4ffa851a-c75f-4d56-9f39-bce4dfd4fc3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d406a901-eec8-4713-bc2b-4da4d04ee733","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"cdab863f-3ec1-4cd2-99d5-79a6cedc8833","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4e8cc3b7-dc39-4373-b0c7-abaa8302db3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0f78f4a2-c768-443a-9933-da9ad5a92294","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"ecd2b892-10ad-4595-9588-31b78eb07378","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220601111340-9404 in cluster insufficient-storage-20220601111340-9404","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"08d08c52-b2d5-4f48-911c-571bdd26e195","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"17c4d92a-e3f0-4c3d-a9fe-5e1259e82044","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1be71d93-7e89-484d-b7a6-39dba8b9c6b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network insufficient-storage-20220601111340-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true insufficient-storage-20220601111340-9404: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network e30a18d6ad4857c74501f5abee4b7b6a9a21c1fb5e79fbe9cb89f7aff8c66ef9 (br-e30a18d6ad48): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4"}}
	{"specversion":"1.0","id":"6657d9f8-d62b-49e8-b522-29603f150dce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
** stderr ** 
	E0601 11:13:54.962810    1484 network_create.go:104] error while trying to create docker network insufficient-storage-20220601111340-9404 192.168.49.0/24: create docker network insufficient-storage-20220601111340-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true insufficient-storage-20220601111340-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e30a18d6ad4857c74501f5abee4b7b6a9a21c1fb5e79fbe9cb89f7aff8c66ef9 (br-e30a18d6ad48): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220601111340-9404 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220601111340-9404 --output=json --layout=cluster: exit status 7 (2.7704778s)

-- stdout --
	{"Name":"insufficient-storage-20220601111340-9404","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":520,"StatusName":"Unknown"}},"Nodes":[{"Name":"insufficient-storage-20220601111340-9404","StatusCode":520,"StatusName":"Unknown","Components":{"apiserver":{"Name":"apiserver","StatusCode":520,"StatusName":"Unknown"},"kubelet":{"Name":"kubelet","StatusCode":520,"StatusName":"Unknown"}}}]}

-- /stdout --
** stderr ** 
	E0601 11:14:01.844587    1100 status.go:258] status error: host: state: unknown state "insufficient-storage-20220601111340-9404": docker container inspect insufficient-storage-20220601111340-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: insufficient-storage-20220601111340-9404
	E0601 11:14:01.844587    1100 status.go:261] The "insufficient-storage-20220601111340-9404" host does not exist!

** /stderr **
status_test.go:98: incorrect node status code: 507
helpers_test.go:175: Cleaning up "insufficient-storage-20220601111340-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20220601111340-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20220601111340-9404: (8.2091536s)
--- FAIL: TestInsufficientStorage (29.35s)
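Editor's note: the network-create failure captured above is Docker refusing a new bridge whose subnet overlaps an existing bridge's. A minimal sketch of that overlap check using Python's stdlib `ipaddress` module (the leftover bridge's subnet is an assumption; the daemon error does not print it):

```python
# Sketch (not from the report): why "docker network create --subnet=192.168.49.0/24"
# was rejected. Docker refuses a bridge whose IPv4 range overlaps an existing one;
# the overlap test itself can be reproduced with the stdlib ipaddress module.
import ipaddress

requested = ipaddress.ip_network("192.168.49.0/24")  # subnet minikube requested
existing = ipaddress.ip_network("192.168.49.0/24")   # assumed subnet of the stale bridge

# True -> the daemon would reject the create with "networks have overlapping IPv4"
print(requested.overlaps(existing))
```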

TestRunningBinaryUpgrade (343.66s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2756299099.exe start -p running-upgrade-20220601111410-9404 --memory=2200 --vm-driver=docker
version_upgrade_test.go:127: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2756299099.exe start -p running-upgrade-20220601111410-9404 --memory=2200 --vm-driver=docker: exit status 70 (54.634035s)

-- stdout --
	* [running-upgrade-20220601111410-9404] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig2157188067
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for running-upgrade-20220601111410-9404 container: output Error response from daemon: create running-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/running-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	* docker "running-upgrade-20220601111410-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220601111410-9404 container: output Error response from daemon: create running-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/running-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220601111410-9404", then "minikube start -p running-upgrade-20220601111410-9404 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 36.67 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 66.11 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 106.92 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 143.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 180.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 214.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 250.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 286.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 323.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 358.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 391.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 428.51 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 465.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 504.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220601111410-9404 container: output Error response from daemon: create running-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/running-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2756299099.exe start -p running-upgrade-20220601111410-9404 --memory=2200 --vm-driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2756299099.exe start -p running-upgrade-20220601111410-9404 --memory=2200 --vm-driver=docker: exit status 70 (1m53.6740056s)

-- stdout --
	* [running-upgrade-20220601111410-9404] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig1693415335
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "running-upgrade-20220601111410-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220601111410-9404 container: output Error response from daemon: create running-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/running-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	* docker "running-upgrade-20220601111410-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220601111410-9404 container: output Error response from daemon: create running-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/running-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220601111410-9404", then "minikube start -p running-upgrade-20220601111410-9404 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220601111410-9404 container: output Error response from daemon: create running-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/running-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2756299099.exe start -p running-upgrade-20220601111410-9404 --memory=2200 --vm-driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2756299099.exe start -p running-upgrade-20220601111410-9404 --memory=2200 --vm-driver=docker: exit status 70 (2m38.7231327s)

-- stdout --
	* [running-upgrade-20220601111410-9404] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig3264688797
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "running-upgrade-20220601111410-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220601111410-9404 container: output Error response from daemon: create running-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/running-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	* docker "running-upgrade-20220601111410-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220601111410-9404 container: output Error response from daemon: create running-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/running-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220601111410-9404", then "minikube start -p running-upgrade-20220601111410-9404 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220601111410-9404 container: output Error response from daemon: create running-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/running-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-06-01 11:19:41.245951 +0000 GMT m=+3391.507331301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220601111410-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect running-upgrade-20220601111410-9404: exit status 1 (1.167216s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: running-upgrade-20220601111410-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-20220601111410-9404 -n running-upgrade-20220601111410-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-20220601111410-9404 -n running-upgrade-20220601111410-9404: exit status 7 (2.9875157s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:19:45.379079    6736 status.go:247] status error: host: state: unknown state "running-upgrade-20220601111410-9404": docker container inspect running-upgrade-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: running-upgrade-20220601111410-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "running-upgrade-20220601111410-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "running-upgrade-20220601111410-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20220601111410-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20220601111410-9404: (8.3363462s)
--- FAIL: TestRunningBinaryUpgrade (343.66s)

TestKubernetesUpgrade (113.49s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220601111922-9404 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220601111922-9404 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 60 (1m17.4776893s)

-- stdout --
	* [kubernetes-upgrade-20220601111922-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubernetes-upgrade-20220601111922-9404 in cluster kubernetes-upgrade-20220601111922-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-20220601111922-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:19:22.605131    2016 out.go:296] Setting OutFile to fd 1488 ...
	I0601 11:19:22.681029    2016 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:22.681029    2016 out.go:309] Setting ErrFile to fd 1484...
	I0601 11:19:22.681029    2016 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:22.693753    2016 out.go:303] Setting JSON to false
	I0601 11:19:22.696415    2016 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14298,"bootTime":1654068064,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:19:22.696415    2016 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:19:22.703415    2016 out.go:177] * [kubernetes-upgrade-20220601111922-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:19:22.707209    2016 notify.go:193] Checking for updates...
	I0601 11:19:22.711710    2016 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:19:22.714915    2016 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:19:22.717151    2016 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:19:22.719780    2016 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:19:22.724807    2016 config.go:178] Loaded profile config "missing-upgrade-20220601111541-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0601 11:19:22.725472    2016 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:19:22.725472    2016 config.go:178] Loaded profile config "running-upgrade-20220601111410-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0601 11:19:22.726137    2016 config.go:178] Loaded profile config "stopped-upgrade-20220601111410-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0601 11:19:22.726137    2016 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:19:25.341172    2016 docker.go:137] docker version: linux-20.10.14
	I0601 11:19:25.347210    2016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:27.371658    2016 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0242394s)
	I0601 11:19:27.372708    2016 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:19:26.355499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:27.376596    2016 out.go:177] * Using the docker driver based on user configuration
	I0601 11:19:27.379733    2016 start.go:284] selected driver: docker
	I0601 11:19:27.379733    2016 start.go:806] validating driver "docker" against <nil>
	I0601 11:19:27.379733    2016 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:19:27.500312    2016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:29.567551    2016 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0672164s)
	I0601 11:19:29.567551    2016 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:19:28.5097371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:29.568241    2016 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:19:29.569027    2016 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 11:19:29.574434    2016 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:19:29.576815    2016 cni.go:95] Creating CNI manager for ""
	I0601 11:19:29.576815    2016 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:19:29.576815    2016 start_flags.go:306] config:
	{Name:kubernetes-upgrade-20220601111922-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220601111922-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:19:29.579180    2016 out.go:177] * Starting control plane node kubernetes-upgrade-20220601111922-9404 in cluster kubernetes-upgrade-20220601111922-9404
	I0601 11:19:29.583835    2016 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:19:29.586903    2016 out.go:177] * Pulling base image ...
	I0601 11:19:29.589309    2016 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:19:29.589309    2016 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:19:29.589309    2016 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 11:19:29.589309    2016 cache.go:57] Caching tarball of preloaded images
	I0601 11:19:29.589309    2016 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:19:29.589309    2016 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 11:19:29.590367    2016 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220601111922-9404\config.json ...
	I0601 11:19:29.590765    2016 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220601111922-9404\config.json: {Name:mk5bec30db9ee7245c16753d127eeb0d92c6e0f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:19:30.661878    2016 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:19:30.662047    2016 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:19:30.662569    2016 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:19:30.662569    2016 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:19:30.662740    2016 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:19:30.662740    2016 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:19:30.662949    2016 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:19:30.662949    2016 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:19:30.663003    2016 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:19:32.966666    2016 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:19:32.966666    2016 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:19:32.966666    2016 start.go:352] acquiring machines lock for kubernetes-upgrade-20220601111922-9404: {Name:mk7cbdca015726d23b5f40cca98cec3e2ce2a13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:19:32.967195    2016 start.go:356] acquired machines lock for "kubernetes-upgrade-20220601111922-9404" in 528.9µs
	I0601 11:19:32.967390    2016 start.go:91] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220601111922-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220601111922-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:19:32.967650    2016 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:19:32.973712    2016 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:19:32.973926    2016 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220601111922-9404" (driver="docker")
	I0601 11:19:32.973926    2016 client.go:168] LocalClient.Create starting
	I0601 11:19:32.974913    2016 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:19:32.975206    2016 main.go:134] libmachine: Decoding PEM data...
	I0601 11:19:32.975206    2016 main.go:134] libmachine: Parsing certificate...
	I0601 11:19:32.975206    2016 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:19:32.975206    2016 main.go:134] libmachine: Decoding PEM data...
	I0601 11:19:32.975206    2016 main.go:134] libmachine: Parsing certificate...
	I0601 11:19:32.987376    2016 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220601111922-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:19:34.079003    2016 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220601111922-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:19:34.079003    2016 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220601111922-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0915033s)
	I0601 11:19:34.087235    2016 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220601111922-9404] to gather additional debugging logs...
	I0601 11:19:34.087235    2016 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220601111922-9404
	W0601 11:19:35.150567    2016 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:19:35.150629    2016 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220601111922-9404: (1.0631228s)
	I0601 11:19:35.150697    2016 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220601111922-9404]: docker network inspect kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:35.150767    2016 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220601111922-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220601111922-9404
	
	** /stderr **
	I0601 11:19:35.157462    2016 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:19:36.247448    2016 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.089973s)
	I0601 11:19:36.269188    2016 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000594710] misses:0}
	I0601 11:19:36.269536    2016 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:19:36.269536    2016 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220601111922-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:19:36.276468    2016 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601111922-9404
	W0601 11:19:37.352346    2016 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:19:37.352346    2016 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601111922-9404: (1.0758661s)
	E0601 11:19:37.352346    2016 network_create.go:104] error while trying to create docker network kubernetes-upgrade-20220601111922-9404 192.168.49.0/24: create docker network kubernetes-upgrade-20220601111922-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ee0c28f9ac3b7bc7d98fa3700c2aa99ad6ede718981e696fc681c96a51ed4797 (br-ee0c28f9ac3b): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:19:37.352346    2016 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220601111922-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ee0c28f9ac3b7bc7d98fa3700c2aa99ad6ede718981e696fc681c96a51ed4797 (br-ee0c28f9ac3b): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220601111922-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ee0c28f9ac3b7bc7d98fa3700c2aa99ad6ede718981e696fc681c96a51ed4797 (br-ee0c28f9ac3b): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:19:37.367121    2016 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:19:38.451277    2016 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0839346s)
	I0601 11:19:38.458515    2016 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:19:39.531820    2016 cli_runner.go:211] docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:19:39.531820    2016 cli_runner.go:217] Completed: docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0731446s)
	I0601 11:19:39.532113    2016 client.go:171] LocalClient.Create took 6.5581121s
	I0601 11:19:41.545847    2016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:19:41.552522    2016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404
	W0601 11:19:42.655185    2016 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:19:42.655185    2016 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: (1.1026505s)
	I0601 11:19:42.655185    2016 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:42.948055    2016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404
	W0601 11:19:44.035696    2016 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:19:44.035696    2016 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: (1.0876292s)
	W0601 11:19:44.035696    2016 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	
	W0601 11:19:44.035696    2016 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:44.046318    2016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:19:44.053027    2016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404
	W0601 11:19:45.123745    2016 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:19:45.123797    2016 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: (1.0706711s)
	I0601 11:19:45.123797    2016 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:45.434926    2016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404
	W0601 11:19:46.520347    2016 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:19:46.520347    2016 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: (1.0847318s)
	W0601 11:19:46.520347    2016 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	
	W0601 11:19:46.520347    2016 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:46.520347    2016 start.go:134] duration metric: createHost completed in 13.5525419s
	I0601 11:19:46.520347    2016 start.go:81] releasing machines lock for "kubernetes-upgrade-20220601111922-9404", held for 13.552997s
	W0601 11:19:46.520347    2016 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220601111922-9404 container: docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220601111922-9404: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404: read-only file system
	I0601 11:19:46.540873    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:19:47.613627    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:19:47.613627    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0727412s)
	I0601 11:19:47.613627    2016 delete.go:82] Unable to get host status for kubernetes-upgrade-20220601111922-9404, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	W0601 11:19:47.614159    2016 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220601111922-9404 container: docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220601111922-9404: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220601111922-9404 container: docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220601111922-9404: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404: read-only file system
	
	I0601 11:19:47.614319    2016 start.go:614] Will try again in 5 seconds ...
	I0601 11:19:52.625433    2016 start.go:352] acquiring machines lock for kubernetes-upgrade-20220601111922-9404: {Name:mk7cbdca015726d23b5f40cca98cec3e2ce2a13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:19:52.625812    2016 start.go:356] acquired machines lock for "kubernetes-upgrade-20220601111922-9404" in 188.8µs
	I0601 11:19:52.625993    2016 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:19:52.626030    2016 fix.go:55] fixHost starting: 
	I0601 11:19:52.645621    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:19:53.699341    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:19:53.699341    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0537087s)
	I0601 11:19:53.699341    2016 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220601111922-9404: state= err=unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:53.699341    2016 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:19:53.702328    2016 out.go:177] * docker "kubernetes-upgrade-20220601111922-9404" container is missing, will recreate.
	I0601 11:19:53.706314    2016 delete.go:124] DEMOLISHING kubernetes-upgrade-20220601111922-9404 ...
	I0601 11:19:53.720325    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:19:54.810289    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:19:54.810289    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0899514s)
	W0601 11:19:54.810289    2016 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:54.810289    2016 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:54.824749    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:19:55.905005    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:19:55.905227    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0802079s)
	I0601 11:19:55.905227    2016 delete.go:82] Unable to get host status for kubernetes-upgrade-20220601111922-9404, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:55.912955    2016 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220601111922-9404
	W0601 11:19:57.024548    2016 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:19:57.024548    2016 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubernetes-upgrade-20220601111922-9404: (1.1114749s)
	I0601 11:19:57.024548    2016 kic.go:356] could not find the container kubernetes-upgrade-20220601111922-9404 to remove it. will try anyways
	I0601 11:19:57.030397    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:19:58.125196    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:19:58.125196    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0947863s)
	W0601 11:19:58.125196    2016 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:58.132964    2016 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-20220601111922-9404 /bin/bash -c "sudo init 0"
	W0601 11:19:59.240636    2016 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-20220601111922-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:19:59.240636    2016 cli_runner.go:217] Completed: docker exec --privileged -t kubernetes-upgrade-20220601111922-9404 /bin/bash -c "sudo init 0": (1.107323s)
	I0601 11:19:59.240636    2016 oci.go:625] error shutdown kubernetes-upgrade-20220601111922-9404: docker exec --privileged -t kubernetes-upgrade-20220601111922-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:00.258115    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:20:01.335366    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:01.335366    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0763647s)
	I0601 11:20:01.335366    2016 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:01.335366    2016 oci.go:639] temporary error: container kubernetes-upgrade-20220601111922-9404 status is  but expect it to be exited
	I0601 11:20:01.335366    2016 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:01.819562    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:20:02.895176    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:02.895176    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.075601s)
	I0601 11:20:02.895176    2016 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:02.895176    2016 oci.go:639] temporary error: container kubernetes-upgrade-20220601111922-9404 status is  but expect it to be exited
	I0601 11:20:02.895176    2016 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:03.803631    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:20:04.889824    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:04.889897    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0858913s)
	I0601 11:20:04.889974    2016 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:04.889974    2016 oci.go:639] temporary error: container kubernetes-upgrade-20220601111922-9404 status is  but expect it to be exited
	I0601 11:20:04.890051    2016 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:05.538552    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:20:06.633571    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:06.633646    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0948519s)
	I0601 11:20:06.633715    2016 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:06.633715    2016 oci.go:639] temporary error: container kubernetes-upgrade-20220601111922-9404 status is  but expect it to be exited
	I0601 11:20:06.633781    2016 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:07.754011    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:20:08.900516    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:08.900516    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.1463531s)
	I0601 11:20:08.900516    2016 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:08.900516    2016 oci.go:639] temporary error: container kubernetes-upgrade-20220601111922-9404 status is  but expect it to be exited
	I0601 11:20:08.900516    2016 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:10.434476    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:20:11.543981    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:11.543981    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.1091635s)
	I0601 11:20:11.544120    2016 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:11.544120    2016 oci.go:639] temporary error: container kubernetes-upgrade-20220601111922-9404 status is  but expect it to be exited
	I0601 11:20:11.544120    2016 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:14.598329    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:20:15.712148    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:15.712148    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.1138064s)
	I0601 11:20:15.712148    2016 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:15.712148    2016 oci.go:639] temporary error: container kubernetes-upgrade-20220601111922-9404 status is  but expect it to be exited
	I0601 11:20:15.712148    2016 oci.go:88] couldn't shut down kubernetes-upgrade-20220601111922-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	 
	I0601 11:20:15.719138    2016 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-20220601111922-9404
	I0601 11:20:16.847447    2016 cli_runner.go:217] Completed: docker rm -f -v kubernetes-upgrade-20220601111922-9404: (1.1282961s)
	I0601 11:20:16.855012    2016 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220601111922-9404
	W0601 11:20:17.918333    2016 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:20:17.918333    2016 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubernetes-upgrade-20220601111922-9404: (1.0627785s)
	I0601 11:20:17.924319    2016 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220601111922-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:20:19.088082    2016 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220601111922-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:20:19.088082    2016 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220601111922-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1637502s)
	I0601 11:20:19.094073    2016 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220601111922-9404] to gather additional debugging logs...
	I0601 11:20:19.094073    2016 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220601111922-9404
	W0601 11:20:20.167805    2016 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:20:20.167805    2016 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220601111922-9404: (1.0737198s)
	I0601 11:20:20.167805    2016 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220601111922-9404]: docker network inspect kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:20.167805    2016 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220601111922-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220601111922-9404
	
	** /stderr **
	W0601 11:20:20.168831    2016 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:20:20.168831    2016 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:20:21.173925    2016 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:20:21.177030    2016 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:20:21.177930    2016 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220601111922-9404" (driver="docker")
	I0601 11:20:21.177930    2016 client.go:168] LocalClient.Create starting
	I0601 11:20:21.178071    2016 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:20:21.178071    2016 main.go:134] libmachine: Decoding PEM data...
	I0601 11:20:21.178071    2016 main.go:134] libmachine: Parsing certificate...
	I0601 11:20:21.178734    2016 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:20:21.178881    2016 main.go:134] libmachine: Decoding PEM data...
	I0601 11:20:21.178881    2016 main.go:134] libmachine: Parsing certificate...
	I0601 11:20:21.186655    2016 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220601111922-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:20:22.305377    2016 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220601111922-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:20:22.305377    2016 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220601111922-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1187092s)
	I0601 11:20:22.311412    2016 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220601111922-9404] to gather additional debugging logs...
	I0601 11:20:22.311412    2016 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220601111922-9404
	W0601 11:20:23.358778    2016 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:20:23.358778    2016 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220601111922-9404: (1.0473538s)
	I0601 11:20:23.358778    2016 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220601111922-9404]: docker network inspect kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:23.358778    2016 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220601111922-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220601111922-9404
	
	** /stderr **
	I0601 11:20:23.367047    2016 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:20:24.500591    2016 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1335305s)
	I0601 11:20:24.518593    2016 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594710] amended:false}} dirty:map[] misses:0}
	I0601 11:20:24.518593    2016 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:20:24.536203    2016 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000594710] amended:true}} dirty:map[192.168.49.0:0xc000594710 192.168.58.0:0xc000006af8] misses:0}
	I0601 11:20:24.536258    2016 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:20:24.536258    2016 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220601111922-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:20:24.543151    2016 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601111922-9404
	W0601 11:20:25.630510    2016 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:20:25.630510    2016 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601111922-9404: (1.0873465s)
	E0601 11:20:25.630510    2016 network_create.go:104] error while trying to create docker network kubernetes-upgrade-20220601111922-9404 192.168.58.0/24: create docker network kubernetes-upgrade-20220601111922-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0f44e5d01bbf3635435c10fc4e3d6c3ffe72bfb2ac6b04dbfecb2e8884a382e0 (br-0f44e5d01bbf): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:20:25.630510    2016 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220601111922-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0f44e5d01bbf3635435c10fc4e3d6c3ffe72bfb2ac6b04dbfecb2e8884a382e0 (br-0f44e5d01bbf): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220601111922-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0f44e5d01bbf3635435c10fc4e3d6c3ffe72bfb2ac6b04dbfecb2e8884a382e0 (br-0f44e5d01bbf): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:20:25.644475    2016 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:20:26.764085    2016 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1195445s)
	I0601 11:20:26.771778    2016 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:20:27.924232    2016 cli_runner.go:211] docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:20:27.924232    2016 cli_runner.go:217] Completed: docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true: (1.152441s)
	I0601 11:20:27.924232    2016 client.go:171] LocalClient.Create took 6.7462257s
	I0601 11:20:29.946369    2016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:20:29.953371    2016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404
	W0601 11:20:31.092067    2016 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:20:31.092185    2016 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: (1.1386829s)
	I0601 11:20:31.092185    2016 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:31.437315    2016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404
	W0601 11:20:32.509428    2016 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:20:32.509459    2016 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: (1.0718951s)
	W0601 11:20:32.509911    2016 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	
	W0601 11:20:32.510047    2016 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:32.522306    2016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:20:32.528307    2016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404
	W0601 11:20:33.592425    2016 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:20:33.592425    2016 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: (1.0641062s)
	I0601 11:20:33.592425    2016 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:33.834321    2016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404
	W0601 11:20:34.955956    2016 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:20:34.956187    2016 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: (1.1216218s)
	W0601 11:20:34.956341    2016 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	
	W0601 11:20:34.956341    2016 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:34.956341    2016 start.go:134] duration metric: createHost completed in 13.782075s
	I0601 11:20:34.966688    2016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:20:34.973493    2016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404
	W0601 11:20:36.040753    2016 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:20:36.040753    2016 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: (1.0672481s)
	I0601 11:20:36.040753    2016 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:36.298115    2016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404
	W0601 11:20:37.400041    2016 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:20:37.400041    2016 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: (1.1019135s)
	W0601 11:20:37.400041    2016 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	
	W0601 11:20:37.400041    2016 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:37.411045    2016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:20:37.417072    2016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404
	W0601 11:20:38.504779    2016 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:20:38.504779    2016 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: (1.0876942s)
	I0601 11:20:38.504779    2016 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:38.717338    2016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404
	W0601 11:20:39.803818    2016 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:20:39.803818    2016 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: (1.0864673s)
	W0601 11:20:39.803818    2016 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	
	W0601 11:20:39.803818    2016 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:39.803818    2016 fix.go:57] fixHost completed within 47.1772515s
	I0601 11:20:39.803818    2016 start.go:81] releasing machines lock for "kubernetes-upgrade-20220601111922-9404", held for 47.1774249s
	W0601 11:20:39.803818    2016 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220601111922-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220601111922-9404 container: docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220601111922-9404: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220601111922-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220601111922-9404 container: docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220601111922-9404: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404: read-only file system
	
	I0601 11:20:39.808858    2016 out.go:177] 
	W0601 11:20:39.810835    2016 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220601111922-9404 container: docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220601111922-9404: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220601111922-9404 container: docker volume create kubernetes-upgrade-20220601111922-9404 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601111922-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220601111922-9404: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220601111922-9404: read-only file system
	
	W0601 11:20:39.810835    2016 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:20:39.810835    2016 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:20:39.814831    2016 out.go:177] 

** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220601111922-9404 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 60
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220601111922-9404
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220601111922-9404: exit status 82 (23.0826571s)

-- stdout --
	* Stopping node "kubernetes-upgrade-20220601111922-9404"  ...
	* Stopping node "kubernetes-upgrade-20220601111922-9404"  ...
	* Stopping node "kubernetes-upgrade-20220601111922-9404"  ...
	* Stopping node "kubernetes-upgrade-20220601111922-9404"  ...
	* Stopping node "kubernetes-upgrade-20220601111922-9404"  ...
	* Stopping node "kubernetes-upgrade-20220601111922-9404"  ...
	
	

-- /stdout --
** stderr ** 
	E0601 11:20:45.652279    4424 daemonize_windows.go:38] error terminating scheduled stop for profile kubernetes-upgrade-20220601111922-9404: stopping schedule-stop service for profile kubernetes-upgrade-20220601111922-9404: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220601111922-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601111922-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect kubernetes-upgrade-20220601111922-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:236: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220601111922-9404 failed: exit status 82
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-06-01 11:21:03.0226085 +0000 GMT m=+3473.283058201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220601111922-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect kubernetes-upgrade-20220601111922-9404: exit status 1 (1.2047292s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: kubernetes-upgrade-20220601111922-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20220601111922-9404 -n kubernetes-upgrade-20220601111922-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20220601111922-9404 -n kubernetes-upgrade-20220601111922-9404: exit status 7 (2.9539427s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:21:07.163212    1100 status.go:247] status error: host: state: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-20220601111922-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220601111922-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220601111922-9404

=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220601111922-9404: (8.6732637s)
--- FAIL: TestKubernetesUpgrade (113.49s)
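The repeated `get port 22` failures above come from minikube evaluating the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` against `docker container inspect` output; when the container does not exist, the lookup has nothing to index. A minimal standalone sketch of that template evaluation, using a hypothetical stand-in struct for Docker's inspect JSON (not minikube's or Docker's actual types):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// container is a minimal stand-in for the .NetworkSettings.Ports shape
// that `docker container inspect --format` evaluates templates against.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct{ HostPort string }
	}
}

// hostPort evaluates the exact template string seen in the log above.
func hostPort(c container) (string, error) {
	const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	t, err := template.New("port").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := t.Execute(&out, c); err != nil {
		// With no "22/tcp" entry the inner index yields an empty slice and
		// the outer index fails -- loosely analogous to the inspect errors
		// in the log when the container is absent.
		return "", err
	}
	return out.String(), nil
}

func main() {
	var c container
	c.NetworkSettings.Ports = map[string][]struct{ HostPort string }{
		"22/tcp": {{HostPort: "49157"}}, // hypothetical published port
	}
	p, err := hostPort(c)
	fmt.Println(p, err)
}
```

The real failure here is one step earlier (the docker CLI itself exits 1 with "No such container"), so the template never even runs; the sketch only illustrates what the `--format` expression is asking Docker for.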

TestMissingContainerUpgrade (376.06s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.15863729.exe start -p missing-upgrade-20220601111541-9404 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.15863729.exe start -p missing-upgrade-20220601111541-9404 --memory=2200 --driver=docker: exit status 78 (2m11.5507873s)

-- stdout --
	* [missing-upgrade-20220601111541-9404] minikube v1.9.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220601111541-9404
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* docker "missing-upgrade-20220601111541-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220601111541-9404 container: output Error response from daemon: create missing-upgrade-20220601111541-9404: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220601111541-9404': mkdir /var/lib/docker/volumes/missing-upgrade-20220601111541-9404: read-only file system
	: exit status 1
	* 
	* [DOCKER_READONLY] Failed to start docker container. "minikube start -p missing-upgrade-20220601111541-9404" may fix it. recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220601111541-9404 container: output Error response from daemon: create missing-upgrade-20220601111541-9404: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220601111541-9404': mkdir /var/lib/docker/volumes/missing-upgrade-20220601111541-9404: read-only file system
	: exit status 1
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.15863729.exe start -p missing-upgrade-20220601111541-9404 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.15863729.exe start -p missing-upgrade-20220601111541-9404 --memory=2200 --driver=docker: exit status 78 (2m38.4156637s)

-- stdout --
	* [missing-upgrade-20220601111541-9404] minikube v1.9.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220601111541-9404
	* Pulling base image ...
	* docker "missing-upgrade-20220601111541-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* docker "missing-upgrade-20220601111541-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220601111541-9404 container: output Error response from daemon: create missing-upgrade-20220601111541-9404: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220601111541-9404': mkdir /var/lib/docker/volumes/missing-upgrade-20220601111541-9404: read-only file system
	: exit status 1
	* 
	* [DOCKER_READONLY] Failed to start docker container. "minikube start -p missing-upgrade-20220601111541-9404" may fix it. recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220601111541-9404 container: output Error response from daemon: create missing-upgrade-20220601111541-9404: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220601111541-9404': mkdir /var/lib/docker/volumes/missing-upgrade-20220601111541-9404: read-only file system
	: exit status 1
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.15863729.exe start -p missing-upgrade-20220601111541-9404 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.15863729.exe start -p missing-upgrade-20220601111541-9404 --memory=2200 --driver=docker: exit status 78 (1m10.8797351s)

-- stdout --
	* [missing-upgrade-20220601111541-9404] minikube v1.9.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220601111541-9404
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "missing-upgrade-20220601111541-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* docker "missing-upgrade-20220601111541-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220601111541-9404 container: output Error response from daemon: create missing-upgrade-20220601111541-9404: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220601111541-9404': mkdir /var/lib/docker/volumes/missing-upgrade-20220601111541-9404: read-only file system
	: exit status 1
	* 
	* [DOCKER_READONLY] Failed to start docker container. "minikube start -p missing-upgrade-20220601111541-9404" may fix it. recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220601111541-9404 container: output Error response from daemon: create missing-upgrade-20220601111541-9404: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220601111541-9404': mkdir /var/lib/docker/volumes/missing-upgrade-20220601111541-9404: read-only file system
	: exit status 1
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 78
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-06-01 11:21:45.3033576 +0000 GMT m=+3515.563325301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220601111541-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect missing-upgrade-20220601111541-9404: exit status 1 (1.0943226s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: missing-upgrade-20220601111541-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-20220601111541-9404 -n missing-upgrade-20220601111541-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-20220601111541-9404 -n missing-upgrade-20220601111541-9404: exit status 7 (2.929677s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:21:49.304726    7360 status.go:247] status error: host: state: unknown state "missing-upgrade-20220601111541-9404": docker container inspect missing-upgrade-20220601111541-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20220601111541-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-20220601111541-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "missing-upgrade-20220601111541-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20220601111541-9404

=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20220601111541-9404: (8.5769436s)
--- FAIL: TestMissingContainerUpgrade (376.06s)
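The "Nonexistent" status printed above comes from minikube mapping a failed `docker container inspect` to a coarse host state: when the container has been deleted, inspect exits non-zero with "Error: No such container", and the status command reports the host as Nonexistent with exit status 7. A minimal sketch of that mapping (a hypothetical helper written for illustration, not minikube's actual `status.go`):

```python
def host_state(inspect_exit_code: int, inspect_stdout: str) -> str:
    """Map the result of
        docker container inspect <name> --format {{.State.Status}}
    to the coarse state the status command prints.

    A non-zero exit (e.g. "Error: No such container: <name>") is what
    this report shows being rendered as "Nonexistent"; otherwise the
    daemon's lowercase state ("running", "exited", ...) is title-cased.
    """
    if inspect_exit_code != 0:
        return "Nonexistent"
    return inspect_stdout.strip().capitalize()
```

For example, `host_state(1, "")` yields "Nonexistent", matching the `exit status 7 (may be ok)` path that helpers_test.go takes above.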

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (83.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220601111410-9404 --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220601111410-9404 --driver=docker: exit status 60 (1m19.4406418s)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220601111410-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node NoKubernetes-20220601111410-9404 in cluster NoKubernetes-20220601111410-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220601111410-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:14:27.821215    7700 network_create.go:104] error while trying to create docker network NoKubernetes-20220601111410-9404 192.168.49.0/24: create docker network NoKubernetes-20220601111410-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 003c4b4d65f31b86887405c40f8b2f8a5ef97111deedbb2ae700fd32f69046a0 (br-003c4b4d65f3): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220601111410-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 003c4b4d65f31b86887405c40f8b2f8a5ef97111deedbb2ae700fd32f69046a0 (br-003c4b4d65f3): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220601111410-9404 container: docker volume create NoKubernetes-20220601111410-9404 --label name.minikube.sigs.k8s.io=NoKubernetes-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220601111410-9404': mkdir /var/lib/docker/volumes/NoKubernetes-20220601111410-9404: read-only file system
	
	E0601 11:15:16.062692    7700 network_create.go:104] error while trying to create docker network NoKubernetes-20220601111410-9404 192.168.58.0/24: create docker network NoKubernetes-20220601111410-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 14da590417a977f17dbd060cbb4c2b11573559e2eb6d111ada316f573c38824a (br-14da590417a9): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220601111410-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 14da590417a977f17dbd060cbb4c2b11573559e2eb6d111ada316f573c38824a (br-14da590417a9): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20220601111410-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220601111410-9404 container: docker volume create NoKubernetes-20220601111410-9404 --label name.minikube.sigs.k8s.io=NoKubernetes-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220601111410-9404': mkdir /var/lib/docker/volumes/NoKubernetes-20220601111410-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220601111410-9404 container: docker volume create NoKubernetes-20220601111410-9404 --label name.minikube.sigs.k8s.io=NoKubernetes-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220601111410-9404': mkdir /var/lib/docker/volumes/NoKubernetes-20220601111410-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220601111410-9404 --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220601111410-9404

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220601111410-9404: exit status 1 (1.1670022s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220601111410-9404

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220601111410-9404 -n NoKubernetes-20220601111410-9404

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220601111410-9404 -n NoKubernetes-20220601111410-9404: exit status 7 (2.8767297s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:15:34.045454    7016 status.go:247] status error: host: state: unknown state "NoKubernetes-20220601111410-9404": docker container inspect NoKubernetes-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220601111410-9404

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220601111410-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (83.49s)
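The "networks have overlapping IPv4" errors above show minikube walking candidate subnets (192.168.49.0/24 first, then 192.168.58.0/24) and the daemon rejecting each because an existing bridge already covers the range. The probing can be sketched with the standard-library `ipaddress` module; this is an illustrative reconstruction, not minikube's `network_create.go`, and the step of 9 between candidates is inferred from the two attempts visible in this log:

```python
import ipaddress
from typing import Iterable, Optional


def first_free_subnet(taken: Iterable[str]) -> Optional[ipaddress.IPv4Network]:
    """Return the first candidate /24 that overlaps none of the
    already-allocated subnets, trying 192.168.49.0/24, 192.168.58.0/24,
    ... as this report shows minikube doing. Returns None if every
    candidate collides.
    """
    taken_nets = [ipaddress.ip_network(t) for t in taken]
    for third_octet in range(49, 256, 9):  # 49, 58, 67, ...
        cand = ipaddress.ip_network(f"192.168.{third_octet}.0/24")
        if not any(cand.overlaps(t) for t in taken_nets):
            return cand
    return None
```

With both subnets from this run already taken, the next non-conflicting candidate would be 192.168.67.0/24; in the failed test above the network creation was abandoned as un-retryable before that point.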

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (356.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.299688546.exe start -p stopped-upgrade-20220601111410-9404 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.299688546.exe start -p stopped-upgrade-20220601111410-9404 --memory=2200 --vm-driver=docker: exit status 70 (1m22.6629158s)

                                                
                                                
-- stdout --
	! [stopped-upgrade-20220601111410-9404] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig847114205
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220601111410-9404 container: output Error response from daemon: create stopped-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/stopped-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	* docker "stopped-upgrade-20220601111410-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220601111410-9404 container: output Error response from daemon: create stopped-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/stopped-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220601111410-9404", then "minikube start -p stopped-upgrade-20220601111410-9404 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.25.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.25.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 13.76 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 57.80 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 93.11 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 130.22 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 181.66 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 223.61 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 263.42 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 304.12 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 344.08 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 380.30 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 429.94 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 454.30 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 472.41 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 515.36 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220601111410-9404 container: output Error response from daemon: create stopped-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/stopped-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.299688546.exe start -p stopped-upgrade-20220601111410-9404 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.299688546.exe start -p stopped-upgrade-20220601111410-9404 --memory=2200 --vm-driver=docker: exit status 70 (1m51.6530121s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220601111410-9404] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig2353798207
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "stopped-upgrade-20220601111410-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220601111410-9404 container: output Error response from daemon: create stopped-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/stopped-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	* docker "stopped-upgrade-20220601111410-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220601111410-9404 container: output Error response from daemon: create stopped-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/stopped-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220601111410-9404", then "minikube start -p stopped-upgrade-20220601111410-9404 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 18.25 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 61.58 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 85.70 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 135.72 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 178.36 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 214.23 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 263.00 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 309.20 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 353.42 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 398.58 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 442.26 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 487.67 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 524.22 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220601111410-9404 container: output Error response from daemon: create stopped-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/stopped-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.299688546.exe start -p stopped-upgrade-20220601111410-9404 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.299688546.exe start -p stopped-upgrade-20220601111410-9404 --memory=2200 --vm-driver=docker: exit status 70 (2m39.3411146s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220601111410-9404] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig1468591298
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "stopped-upgrade-20220601111410-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220601111410-9404 container: output Error response from daemon: create stopped-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/stopped-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	* docker "stopped-upgrade-20220601111410-9404" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220601111410-9404 container: output Error response from daemon: create stopped-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/stopped-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220601111410-9404", then "minikube start -p stopped-upgrade-20220601111410-9404 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 19.97 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 51.80 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 61.33 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 101.12 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 135.56 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 163.70 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 198.95 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 235.70 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 272.11 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 313.94 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 345.62 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 372.61 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 407.47 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 444.94 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 473.44 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 514.11 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220601111410-9404 container: output Error response from daemon: create stopped-upgrade-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220601111410-9404': mkdir /var/lib/docker/volumes/stopped-upgrade-20220601111410-9404: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (356.80s)
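Every start attempt in this test died on the same daemon error: `mkdir /var/lib/docker/volumes/...: read-only file system`, which the newer minikube binary surfaces as `PR_DOCKER_READONLY_VOL` with the suggestion to restart Docker. Bucketing the recurring daemon errors in this report can be sketched with simple substring matching; this is a heuristic written for illustration, and the labels other than `PR_DOCKER_READONLY_VOL` are hypothetical (the real mapping lives in minikube's reason package):

```python
def classify_start_failure(stderr: str) -> str:
    """Bucket a docker daemon error message into a coarse failure
    reason, mimicking the error codes visible in this report.
    Checks are ordered; the first matching substring wins.
    """
    if "read-only file system" in stderr:
        return "PR_DOCKER_READONLY_VOL"   # log's suggestion: restart Docker
    if "networks have overlapping IPv4" in stderr:
        return "IP_OVERLAP"               # hypothetical label; log shows a warning only
    if "No such container" in stderr:
        return "CONTAINER_MISSING"        # hypothetical label
    return "UNKNOWN"
```

Applied to the stderr blocks above, the volume-creation failures classify as `PR_DOCKER_READONLY_VOL`, which is consistent with all three retries failing identically until Docker itself is restarted.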

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (117.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220601111410-9404 --no-kubernetes --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220601111410-9404 --no-kubernetes --driver=docker: exit status 60 (1m53.3308643s)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220601111410-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes NoKubernetes-20220601111410-9404 in cluster NoKubernetes-20220601111410-9404
	* Pulling base image ...
	* docker "NoKubernetes-20220601111410-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220601111410-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:16:21.988197    4944 network_create.go:104] error while trying to create docker network NoKubernetes-20220601111410-9404 192.168.49.0/24: create docker network NoKubernetes-20220601111410-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fd0dd2e30a08e294ee5ba9f629f8c852f423c1ac86b9d73695d883b436568287 (br-fd0dd2e30a08): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220601111410-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network fd0dd2e30a08e294ee5ba9f629f8c852f423c1ac86b9d73695d883b436568287 (br-fd0dd2e30a08): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220601111410-9404 container: docker volume create NoKubernetes-20220601111410-9404 --label name.minikube.sigs.k8s.io=NoKubernetes-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220601111410-9404': mkdir /var/lib/docker/volumes/NoKubernetes-20220601111410-9404: read-only file system
	
	E0601 11:17:13.566937    4944 network_create.go:104] error while trying to create docker network NoKubernetes-20220601111410-9404 192.168.58.0/24: create docker network NoKubernetes-20220601111410-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e39c0ef75db19c9ac0e0826871bbe06c4f07b46f9c4a794918a4a19df9a0e00c (br-e39c0ef75db1): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220601111410-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e39c0ef75db19c9ac0e0826871bbe06c4f07b46f9c4a794918a4a19df9a0e00c (br-e39c0ef75db1): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20220601111410-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220601111410-9404 container: docker volume create NoKubernetes-20220601111410-9404 --label name.minikube.sigs.k8s.io=NoKubernetes-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220601111410-9404': mkdir /var/lib/docker/volumes/NoKubernetes-20220601111410-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220601111410-9404 container: docker volume create NoKubernetes-20220601111410-9404 --label name.minikube.sigs.k8s.io=NoKubernetes-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220601111410-9404': mkdir /var/lib/docker/volumes/NoKubernetes-20220601111410-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220601111410-9404 --no-kubernetes --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220601111410-9404

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220601111410-9404: exit status 1 (1.1561771s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220601111410-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220601111410-9404 -n NoKubernetes-20220601111410-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220601111410-9404 -n NoKubernetes-20220601111410-9404: exit status 7 (2.8792284s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:17:31.405924    6940 status.go:247] status error: host: state: unknown state "NoKubernetes-20220601111410-9404": docker container inspect NoKubernetes-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220601111410-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220601111410-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (117.37s)

TestNoKubernetes/serial/Start (102.73s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220601111410-9404 --no-kubernetes --driver=docker

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220601111410-9404 --no-kubernetes --driver=docker: exit status 1 (1m38.6956864s)

-- stdout --
	* [NoKubernetes-20220601111410-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes NoKubernetes-20220601111410-9404 in cluster NoKubernetes-20220601111410-9404
	* Pulling base image ...
	* docker "NoKubernetes-20220601111410-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220601111410-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...

-- /stdout --
** stderr ** 
	E0601 11:18:18.542715    9248 network_create.go:104] error while trying to create docker network NoKubernetes-20220601111410-9404 192.168.49.0/24: create docker network NoKubernetes-20220601111410-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f475ab8f1fa315f48baac9ad7323316be41039aa3c4d05f5a5d49dcd2b73f7c5 (br-f475ab8f1fa3): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220601111410-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f475ab8f1fa315f48baac9ad7323316be41039aa3c4d05f5a5d49dcd2b73f7c5 (br-f475ab8f1fa3): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220601111410-9404 container: docker volume create NoKubernetes-20220601111410-9404 --label name.minikube.sigs.k8s.io=NoKubernetes-20220601111410-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220601111410-9404: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220601111410-9404': mkdir /var/lib/docker/volumes/NoKubernetes-20220601111410-9404: read-only file system
	
	E0601 11:19:10.025257    9248 network_create.go:104] error while trying to create docker network NoKubernetes-20220601111410-9404 192.168.58.0/24: create docker network NoKubernetes-20220601111410-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 95df125b44483225fc1829fd0c2208471d55a5b211872377954a69555670fc05 (br-95df125b4448): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220601111410-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601111410-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 95df125b44483225fc1829fd0c2208471d55a5b211872377954a69555670fc05 (br-95df125b4448): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220601111410-9404 --no-kubernetes --driver=docker" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220601111410-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220601111410-9404: exit status 1 (1.1587468s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220601111410-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220601111410-9404 -n NoKubernetes-20220601111410-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220601111410-9404 -n NoKubernetes-20220601111410-9404: exit status 7 (2.8608306s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:19:14.134304    5588 status.go:247] status error: host: state: unknown state "NoKubernetes-20220601111410-9404": docker container inspect NoKubernetes-20220601111410-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220601111410-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220601111410-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Start (102.73s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.39s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220601111410-9404
version_upgrade_test.go:213: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220601111410-9404: exit status 80 (3.3782762s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                  Args                                  |                 Profile                  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| delete  | -p                                                                     | download-docker-20220601102408-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:24 GMT | 01 Jun 22 10:24 GMT |
	|         | download-docker-20220601102408-9404                                    |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | binary-mirror-20220601102453-9404        | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:25 GMT | 01 Jun 22 10:25 GMT |
	|         | binary-mirror-20220601102453-9404                                      |                                          |                   |                |                     |                     |
	| delete  | -p addons-20220601102510-9404                                          | addons-20220601102510-9404               | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:26 GMT | 01 Jun 22 10:26 GMT |
	| delete  | -p nospam-20220601102633-9404                                          | nospam-20220601102633-9404               | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:29 GMT | 01 Jun 22 10:29 GMT |
	| cache   | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	|         | cache add k8s.gcr.io/pause:3.1                                         |                                          |                   |                |                     |                     |
	| cache   | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	|         | cache add k8s.gcr.io/pause:3.3                                         |                                          |                   |                |                     |                     |
	| cache   | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	|         | cache add                                                              |                                          |                   |                |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                |                                          |                   |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.3                                            | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	| cache   | list                                                                   | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	| cache   | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	|         | cache reload                                                           |                                          |                   |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.1                                            | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	| cache   | delete k8s.gcr.io/pause:latest                                         | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:33 GMT | 01 Jun 22 10:33 GMT |
	| config  | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | config unset cpus                                                      |                                          |                   |                |                     |                     |
	| config  | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | config set cpus 2                                                      |                                          |                   |                |                     |                     |
	| config  | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | config get cpus                                                        |                                          |                   |                |                     |                     |
	| config  | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | config unset cpus                                                      |                                          |                   |                |                     |                     |
	| image   | functional-20220601102952-9404 image load --daemon                     | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220601102952-9404  |                                          |                   |                |                     |                     |
	| image   | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | image ls                                                               |                                          |                   |                |                     |                     |
	| addons  | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | addons list                                                            |                                          |                   |                |                     |                     |
	| addons  | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | addons list -o json                                                    |                                          |                   |                |                     |                     |
	| image   | functional-20220601102952-9404 image load --daemon                     | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220601102952-9404  |                                          |                   |                |                     |                     |
	| image   | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | image ls                                                               |                                          |                   |                |                     |                     |
	| profile | list --output json                                                     | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	| image   | functional-20220601102952-9404 image save                              | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220601102952-9404  |                                          |                   |                |                     |                     |
	|         | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar |                                          |                   |                |                     |                     |
	| profile | list                                                                   | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	| image   | functional-20220601102952-9404 image rm                                | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220601102952-9404  |                                          |                   |                |                     |                     |
	| profile | list -l                                                                | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	| image   | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | image ls                                                               |                                          |                   |                |                     |                     |
	| profile | list -o json                                                           | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	| profile | list -o json --light                                                   | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	| image   | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | image ls --format short                                                |                                          |                   |                |                     |                     |
	| image   | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | image ls --format yaml                                                 |                                          |                   |                |                     |                     |
	| image   | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:36 GMT |
	|         | image ls --format json                                                 |                                          |                   |                |                     |                     |
	| image   | functional-20220601102952-9404 image build -t                          | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:37 GMT |
	|         | localhost/my-image:functional-20220601102952-9404                      |                                          |                   |                |                     |                     |
	|         | testdata\build                                                         |                                          |                   |                |                     |                     |
	| image   | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:36 GMT | 01 Jun 22 10:37 GMT |
	|         | image ls --format table                                                |                                          |                   |                |                     |                     |
	| image   | functional-20220601102952-9404                                         | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:37 GMT | 01 Jun 22 10:37 GMT |
	|         | image ls                                                               |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | functional-20220601102952-9404           | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:41 GMT | 01 Jun 22 10:42 GMT |
	|         | functional-20220601102952-9404                                         |                                          |                   |                |                     |                     |
	| addons  | ingress-addon-legacy-20220601104200-9404                               | ingress-addon-legacy-20220601104200-9404 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:43 GMT | 01 Jun 22 10:43 GMT |
	|         | addons enable ingress-dns                                              |                                          |                   |                |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | ingress-addon-legacy-20220601104200-9404 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:43 GMT | 01 Jun 22 10:43 GMT |
	|         | ingress-addon-legacy-20220601104200-9404                               |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | json-output-20220601104339-9404          | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:45 GMT | 01 Jun 22 10:45 GMT |
	|         | json-output-20220601104339-9404                                        |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | json-output-error-20220601104530-9404    | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:45 GMT | 01 Jun 22 10:45 GMT |
	|         | json-output-error-20220601104530-9404                                  |                                          |                   |                |                     |                     |
	| start   | -p                                                                     | docker-network-20220601104537-9404       | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:45 GMT | 01 Jun 22 10:48 GMT |
	|         | docker-network-20220601104537-9404                                     |                                          |                   |                |                     |                     |
	|         | --network=                                                             |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | docker-network-20220601104537-9404       | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:48 GMT | 01 Jun 22 10:49 GMT |
	|         | docker-network-20220601104537-9404                                     |                                          |                   |                |                     |                     |
	| start   | -p                                                                     | docker-network-20220601104938-9404       | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:49 GMT | 01 Jun 22 10:52 GMT |
	|         | docker-network-20220601104938-9404                                     |                                          |                   |                |                     |                     |
	|         | --network=bridge                                                       |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | docker-network-20220601104938-9404       | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:52 GMT | 01 Jun 22 10:53 GMT |
	|         | docker-network-20220601104938-9404                                     |                                          |                   |                |                     |                     |
	| start   | -p                                                                     | custom-subnet-20220601105331-9404        | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:53 GMT | 01 Jun 22 10:56 GMT |
	|         | custom-subnet-20220601105331-9404                                      |                                          |                   |                |                     |                     |
	|         | --subnet=192.168.60.0/24                                               |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | custom-subnet-20220601105331-9404        | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:56 GMT | 01 Jun 22 10:57 GMT |
	|         | custom-subnet-20220601105331-9404                                      |                                          |                   |                |                     |                     |
	| delete  | -p second-20220601105728-9404                                          | second-20220601105728-9404               | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 GMT | 01 Jun 22 10:58 GMT |
	| delete  | -p first-20220601105728-9404                                           | first-20220601105728-9404                | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 10:58 GMT | 01 Jun 22 10:59 GMT |
	| delete  | -p                                                                     | mount-start-2-20220601105903-9404        | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 11:00 GMT | 01 Jun 22 11:00 GMT |
	|         | mount-start-2-20220601105903-9404                                      |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | mount-start-1-20220601105903-9404        | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 11:00 GMT | 01 Jun 22 11:00 GMT |
	|         | mount-start-1-20220601105903-9404                                      |                                          |                   |                |                     |                     |
	| profile | list --output json                                                     | minikube                                 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 11:02 GMT | 01 Jun 22 11:02 GMT |
	| delete  | -p                                                                     | multinode-20220601110036-9404-m02        | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 11:10 GMT | 01 Jun 22 11:10 GMT |
	|         | multinode-20220601110036-9404-m02                                      |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | multinode-20220601110036-9404            | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 11:10 GMT | 01 Jun 22 11:10 GMT |
	|         | multinode-20220601110036-9404                                          |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | test-preload-20220601111047-9404         | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 11:12 GMT | 01 Jun 22 11:12 GMT |
	|         | test-preload-20220601111047-9404                                       |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | scheduled-stop-20220601111214-9404       | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 11:13 GMT | 01 Jun 22 11:13 GMT |
	|         | scheduled-stop-20220601111214-9404                                     |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | insufficient-storage-20220601111340-9404 | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 11:14 GMT | 01 Jun 22 11:14 GMT |
	|         | insufficient-storage-20220601111340-9404                               |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | offline-docker-20220601111410-9404       | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 11:15 GMT | 01 Jun 22 11:15 GMT |
	|         | offline-docker-20220601111410-9404                                     |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | NoKubernetes-20220601111410-9404         | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 GMT | 01 Jun 22 11:19 GMT |
	|         | NoKubernetes-20220601111410-9404                                       |                                          |                   |                |                     |                     |
	| delete  | -p                                                                     | running-upgrade-20220601111410-9404      | minikube2\jenkins | v1.26.0-beta.1 | 01 Jun 22 11:19 GMT | 01 Jun 22 11:19 GMT |
	|         | running-upgrade-20220601111410-9404                                    |                                          |                   |                |                     |                     |
	|---------|------------------------------------------------------------------------|------------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:19:54
	Running on machine: minikube2
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:19:54.009109    5312 out.go:296] Setting OutFile to fd 1744 ...
	I0601 11:19:54.068969    5312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:54.068969    5312 out.go:309] Setting ErrFile to fd 1756...
	I0601 11:19:54.068969    5312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:19:54.083212    5312 out.go:303] Setting JSON to false
	I0601 11:19:54.086385    5312 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14329,"bootTime":1654068065,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:19:54.087137    5312 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:19:54.091822    5312 out.go:177] * [force-systemd-flag-20220601111953-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:19:54.095103    5312 notify.go:193] Checking for updates...
	I0601 11:19:54.097366    5312 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:19:54.099938    5312 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:19:54.102017    5312 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:19:54.104869    5312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:19:53.706314    2016 delete.go:124] DEMOLISHING kubernetes-upgrade-20220601111922-9404 ...
	I0601 11:19:53.720325    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:19:54.810289    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:19:54.810289    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0899514s)
	W0601 11:19:54.810289    2016 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:54.810289    2016 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:54.824749    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:19:55.905005    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:19:55.905227    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0802079s)
	I0601 11:19:55.905227    2016 delete.go:82] Unable to get host status for kubernetes-upgrade-20220601111922-9404, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:55.912955    2016 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220601111922-9404
	W0601 11:19:57.024548    2016 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220601111922-9404 returned with exit code 1
	I0601 11:19:57.024548    2016 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubernetes-upgrade-20220601111922-9404: (1.1114749s)
	I0601 11:19:57.024548    2016 kic.go:356] could not find the container kubernetes-upgrade-20220601111922-9404 to remove it. will try anyways
	I0601 11:19:57.030397    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	I0601 11:19:54.108229    5312 config.go:178] Loaded profile config "kubernetes-upgrade-20220601111922-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:19:54.109060    5312 config.go:178] Loaded profile config "missing-upgrade-20220601111541-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0601 11:19:54.109496    5312 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:19:54.109639    5312 config.go:178] Loaded profile config "stopped-upgrade-20220601111410-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0601 11:19:54.109639    5312 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:19:56.848962    5312 docker.go:137] docker version: linux-20.10.14
	I0601 11:19:56.856670    5312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:19:58.975744    5312 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1190503s)
	I0601 11:19:58.976784    5312 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:19:57.9079355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:19:58.983935    5312 out.go:177] * Using the docker driver based on user configuration
	I0601 11:19:58.987521    5312 start.go:284] selected driver: docker
	I0601 11:19:58.987601    5312 start.go:806] validating driver "docker" against <nil>
	I0601 11:19:58.987627    5312 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:19:59.058166    5312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:20:01.150923    5312 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0926877s)
	I0601 11:20:01.151149    5312 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:20:00.1112696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:20:01.151149    5312 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:20:01.152120    5312 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 11:20:01.159218    5312 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:20:01.161134    5312 cni.go:95] Creating CNI manager for ""
	I0601 11:20:01.161134    5312 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:20:01.161134    5312 start_flags.go:306] config:
	{Name:force-systemd-flag-20220601111953-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-flag-20220601111953-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:20:01.164760    5312 out.go:177] * Starting control plane node force-systemd-flag-20220601111953-9404 in cluster force-systemd-flag-20220601111953-9404
	I0601 11:20:01.166062    5312 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:20:01.168845    5312 out.go:177] * Pulling base image ...
	W0601 11:19:58.125196    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:19:58.125196    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0947863s)
	W0601 11:19:58.125196    2016 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:19:58.132964    2016 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-20220601111922-9404 /bin/bash -c "sudo init 0"
	W0601 11:19:59.240636    2016 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-20220601111922-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:19:59.240636    2016 cli_runner.go:217] Completed: docker exec --privileged -t kubernetes-upgrade-20220601111922-9404 /bin/bash -c "sudo init 0": (1.107323s)
	I0601 11:19:59.240636    2016 oci.go:625] error shutdown kubernetes-upgrade-20220601111922-9404: docker exec --privileged -t kubernetes-upgrade-20220601111922-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:00.258115    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:20:01.335366    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:01.335366    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0763647s)
	I0601 11:20:01.335366    2016 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:01.335366    2016 oci.go:639] temporary error: container kubernetes-upgrade-20220601111922-9404 status is  but expect it to be exited
	I0601 11:20:01.335366    2016 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:01.819562    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	I0601 11:20:01.171146    5312 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:20:01.171146    5312 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:20:01.171146    5312 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:20:01.171146    5312 cache.go:57] Caching tarball of preloaded images
	I0601 11:20:01.171146    5312 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:20:01.172722    5312 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:20:01.172941    5312 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-flag-20220601111953-9404\config.json ...
	I0601 11:20:01.173201    5312 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-flag-20220601111953-9404\config.json: {Name:mk7ec8e31fce3e65c5fd2707c7cc53c961947f3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:20:02.294327    5312 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:20:02.294383    5312 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:20:02.294383    5312 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:20:02.294383    5312 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:20:02.294383    5312 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:20:02.294909    5312 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:20:02.295043    5312 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:20:02.295138    5312 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:20:02.295138    5312 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:20:04.641008    5312 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-851108038: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-851108038: read-only file system"}
	I0601 11:20:04.641008    5312 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:20:04.641551    5312 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:20:04.641612    5312 start.go:352] acquiring machines lock for force-systemd-flag-20220601111953-9404: {Name:mkd2e1671bd667104ead68be88be376eded12c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:20:04.641961    5312 start.go:356] acquired machines lock for "force-systemd-flag-20220601111953-9404" in 243.4µs
	I0601 11:20:04.642298    5312 start.go:91] Provisioning new machine with config: &{Name:force-systemd-flag-20220601111953-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:force-systemd-flag-20220601111953
-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:20:04.642401    5312 start.go:131] createHost starting for "" (driver="docker")
	W0601 11:20:02.895176    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:02.895176    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.075601s)
	I0601 11:20:02.895176    2016 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:02.895176    2016 oci.go:639] temporary error: container kubernetes-upgrade-20220601111922-9404 status is  but expect it to be exited
	I0601 11:20:02.895176    2016 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:03.803631    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:20:04.889824    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:04.889897    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0858913s)
	I0601 11:20:04.889974    2016 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:04.889974    2016 oci.go:639] temporary error: container kubernetes-upgrade-20220601111922-9404 status is  but expect it to be exited
	I0601 11:20:04.890051    2016 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:05.538552    2016 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}
	W0601 11:20:06.633571    2016 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:20:06.633646    2016 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: (1.0948519s)
	I0601 11:20:06.633715    2016 oci.go:637] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	I0601 11:20:06.633715    2016 oci.go:639] temporary error: container kubernetes-upgrade-20220601111922-9404 status is  but expect it to be exited
	I0601 11:20:06.633781    2016 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %!v(MISSING): unknown state "kubernetes-upgrade-20220601111922-9404": docker container inspect kubernetes-upgrade-20220601111922-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220601111922-9404
	
	* 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "stopped-upgrade-20220601111410-9404": docker container inspect stopped-upgrade-20220601111410-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: stopped-upgrade-20220601111410-9404
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_746.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:215: `minikube logs` after upgrade to HEAD from v1.9.0 failed: exit status 80
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (3.39s)
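The `oci.go`/`retry.go` entries above show minikube re-running `docker container inspect` with growing delays (636ms, then ~1.1s) while it tries to verify the container has exited. A minimal Python sketch of that retry-with-backoff pattern; `retry_with_backoff` and `check` are illustrative names, not minikube's actual API:

```python
import random
import time

def retry_with_backoff(check, attempts=5, base_delay=0.5):
    """Retry `check` until it returns None (success), sleeping a
    jittered, growing delay between attempts -- roughly the behaviour
    logged by retry.go ("will retry after ...")."""
    for attempt in range(attempts):
        err = check()
        if err is None:
            return True
        # Grow the wait each attempt and add jitter so parallel
        # callers do not retry in lockstep.
        delay = base_delay * (2 ** attempt) * (0.5 + random.random())
        time.sleep(delay)
    return False
```

In the failing run above the retries never succeed, because the container genuinely no longer exists ("No such container"), so the loop exhausts its budget and the test surfaces exit status 80.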
TestPause/serial/Start (82.13s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220601112115-9404 --memory=2048 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-20220601112115-9404 --memory=2048 --install-addons=false --wait=all --driver=docker: exit status 60 (1m18.0946435s)

-- stdout --
	* [pause-20220601112115-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node pause-20220601112115-9404 in cluster pause-20220601112115-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "pause-20220601112115-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E0601 11:21:31.325931    2572 network_create.go:104] error while trying to create docker network pause-20220601112115-9404 192.168.49.0/24: create docker network pause-20220601112115-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220601112115-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f464da535fd2bb6c592faeff1ab4cdb568cb6e6024085a3ba710acb1f1c88503 (br-f464da535fd2): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network pause-20220601112115-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220601112115-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f464da535fd2bb6c592faeff1ab4cdb568cb6e6024085a3ba710acb1f1c88503 (br-f464da535fd2): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for pause-20220601112115-9404 container: docker volume create pause-20220601112115-9404 --label name.minikube.sigs.k8s.io=pause-20220601112115-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220601112115-9404: error while creating volume root path '/var/lib/docker/volumes/pause-20220601112115-9404': mkdir /var/lib/docker/volumes/pause-20220601112115-9404: read-only file system
	
	E0601 11:22:19.824575    2572 network_create.go:104] error while trying to create docker network pause-20220601112115-9404 192.168.58.0/24: create docker network pause-20220601112115-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220601112115-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network eea1a9a197a6caa06e07667dfe67417fd8505d116ce0304ff305a71d6b87c693 (br-eea1a9a197a6): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network pause-20220601112115-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220601112115-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network eea1a9a197a6caa06e07667dfe67417fd8505d116ce0304ff305a71d6b87c693 (br-eea1a9a197a6): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p pause-20220601112115-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for pause-20220601112115-9404 container: docker volume create pause-20220601112115-9404 --label name.minikube.sigs.k8s.io=pause-20220601112115-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220601112115-9404: error while creating volume root path '/var/lib/docker/volumes/pause-20220601112115-9404': mkdir /var/lib/docker/volumes/pause-20220601112115-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for pause-20220601112115-9404 container: docker volume create pause-20220601112115-9404 --label name.minikube.sigs.k8s.io=pause-20220601112115-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220601112115-9404: error while creating volume root path '/var/lib/docker/volumes/pause-20220601112115-9404': mkdir /var/lib/docker/volumes/pause-20220601112115-9404: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p pause-20220601112115-9404 --memory=2048 --install-addons=false --wait=all --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220601112115-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20220601112115-9404: exit status 1 (1.1201784s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: pause-20220601112115-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220601112115-9404 -n pause-20220601112115-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220601112115-9404 -n pause-20220601112115-9404: exit status 7 (2.9087459s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:22:37.970345    9052 status.go:247] status error: host: state: unknown state "pause-20220601112115-9404": docker container inspect pause-20220601112115-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220601112115-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20220601112115-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/Start (82.13s)
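Both `network_create.go` failures in this test are Docker's IPAM rejecting a new bridge whose subnet overlaps an existing one: first 192.168.49.0/24, then the fallback 192.168.58.0/24. The overlap test itself can be reproduced with Python's stdlib `ipaddress` module; the network lists below are illustrative stand-ins, not taken from the failing host:

```python
import ipaddress

def first_free_subnet(existing, candidates):
    """Return the first candidate subnet that overlaps none of the
    existing networks -- the check Docker's IPAM performs before
    creating a bridge, and roughly what minikube walks through when
    it falls back from 192.168.49.0/24 to 192.168.58.0/24."""
    taken = [ipaddress.ip_network(n) for n in existing]
    for cand in candidates:
        net = ipaddress.ip_network(cand)
        if not any(net.overlaps(t) for t in taken):
            return str(net)
    return None

# Illustrative: pretend two leftover bridges already occupy the
# first two subnets minikube tries.
existing = ["192.168.49.0/24", "192.168.58.0/24"]
candidates = ["192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"]
print(first_free_subnet(existing, candidates))  # -> 192.168.67.0/24
```

On a host in this state, `docker network ls` would list the leftover `br-*` bridges and `docker network prune` would remove the unused ones; note, though, that the test ultimately fails on `PR_DOCKER_READONLY_VOL` (a read-only `/var/lib/docker/volumes`), for which the log's own suggestion is to restart Docker.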
TestStartStop/group/old-k8s-version/serial/FirstStart (81.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220601112246-9404 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20220601112246-9404 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: exit status 60 (1m17.1883655s)

-- stdout --
	* [old-k8s-version-20220601112246-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node old-k8s-version-20220601112246-9404 in cluster old-k8s-version-20220601112246-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20220601112246-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:22:46.597671    6976 out.go:296] Setting OutFile to fd 1864 ...
	I0601 11:22:46.656679    6976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:22:46.656679    6976 out.go:309] Setting ErrFile to fd 1868...
	I0601 11:22:46.656679    6976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:22:46.670514    6976 out.go:303] Setting JSON to false
	I0601 11:22:46.674497    6976 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14502,"bootTime":1654068064,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:22:46.674497    6976 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:22:46.679470    6976 out.go:177] * [old-k8s-version-20220601112246-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:22:46.683172    6976 notify.go:193] Checking for updates...
	I0601 11:22:46.685582    6976 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:22:46.689355    6976 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:22:46.692238    6976 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:22:46.694473    6976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:22:46.698587    6976 config.go:178] Loaded profile config "cert-expiration-20220601112128-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:22:46.698830    6976 config.go:178] Loaded profile config "cert-options-20220601112212-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:22:46.699476    6976 config.go:178] Loaded profile config "docker-flags-20220601112157-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:22:46.699476    6976 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:22:46.700221    6976 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:22:49.313993    6976 docker.go:137] docker version: linux-20.10.14
	I0601 11:22:49.321859    6976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:22:51.370861    6976 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0489781s)
	I0601 11:22:51.371663    6976 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:22:50.3490783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:22:51.381291    6976 out.go:177] * Using the docker driver based on user configuration
	I0601 11:22:51.383340    6976 start.go:284] selected driver: docker
	I0601 11:22:51.383340    6976 start.go:806] validating driver "docker" against <nil>
	I0601 11:22:51.383340    6976 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:22:51.454639    6976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:22:53.511961    6976 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0572983s)
	I0601 11:22:53.511961    6976 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:22:52.4733141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:22:53.512506    6976 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:22:53.512730    6976 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:22:53.517122    6976 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:22:53.519772    6976 cni.go:95] Creating CNI manager for ""
	I0601 11:22:53.520290    6976 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:22:53.520290    6976 start_flags.go:306] config:
	{Name:old-k8s-version-20220601112246-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601112246-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:22:53.522715    6976 out.go:177] * Starting control plane node old-k8s-version-20220601112246-9404 in cluster old-k8s-version-20220601112246-9404
	I0601 11:22:53.525838    6976 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:22:53.528345    6976 out.go:177] * Pulling base image ...
	I0601 11:22:53.531814    6976 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:22:53.531947    6976 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:22:53.532015    6976 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 11:22:53.532015    6976 cache.go:57] Caching tarball of preloaded images
	I0601 11:22:53.532015    6976 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:22:53.532671    6976 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 11:22:53.532818    6976 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20220601112246-9404\config.json ...
	I0601 11:22:53.532818    6976 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20220601112246-9404\config.json: {Name:mkf5b118dd70c4faabb963ec56ecb2a9d1a35dec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:22:54.628301    6976 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:22:54.628437    6976 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:22:54.628437    6976 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:22:54.628437    6976 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:22:54.628961    6976 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:22:54.628961    6976 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:22:54.629021    6976 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:22:54.629021    6976 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:22:54.629021    6976 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:22:56.934836    6976 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:22:56.934836    6976 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:22:56.934924    6976 start.go:352] acquiring machines lock for old-k8s-version-20220601112246-9404: {Name:mk41775024acf710d15af281ba02dfa90cd6ead3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:22:56.935185    6976 start.go:356] acquired machines lock for "old-k8s-version-20220601112246-9404" in 163.5µs
	I0601 11:22:56.935453    6976 start.go:91] Provisioning new machine with config: &{Name:old-k8s-version-20220601112246-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601112246-9404 N
amespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:22:56.935665    6976 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:22:56.941159    6976 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:22:56.941743    6976 start.go:165] libmachine.API.Create for "old-k8s-version-20220601112246-9404" (driver="docker")
	I0601 11:22:56.941743    6976 client.go:168] LocalClient.Create starting
	I0601 11:22:56.942389    6976 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:22:56.942389    6976 main.go:134] libmachine: Decoding PEM data...
	I0601 11:22:56.942389    6976 main.go:134] libmachine: Parsing certificate...
	I0601 11:22:56.943047    6976 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:22:56.943215    6976 main.go:134] libmachine: Decoding PEM data...
	I0601 11:22:56.943215    6976 main.go:134] libmachine: Parsing certificate...
	I0601 11:22:56.952903    6976 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:22:58.001201    6976 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:22:58.001201    6976 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0482866s)
	I0601 11:22:58.008718    6976 network_create.go:272] running [docker network inspect old-k8s-version-20220601112246-9404] to gather additional debugging logs...
	I0601 11:22:58.008795    6976 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404
	W0601 11:22:59.084531    6976 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:22:59.084531    6976 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404: (1.0757238s)
	I0601 11:22:59.084531    6976 network_create.go:275] error running [docker network inspect old-k8s-version-20220601112246-9404]: docker network inspect old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220601112246-9404
	I0601 11:22:59.084531    6976 network_create.go:277] output of [docker network inspect old-k8s-version-20220601112246-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220601112246-9404
	
	** /stderr **
	I0601 11:22:59.092354    6976 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:23:00.160902    6976 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0679586s)
	I0601 11:23:00.180410    6976 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001265f8] misses:0}
	I0601 11:23:00.181286    6976 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:23:00.181286    6976 network_create.go:115] attempt to create docker network old-k8s-version-20220601112246-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:23:00.187678    6976 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404
	W0601 11:23:01.282968    6976 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:01.283048    6976 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: (1.095122s)
	E0601 11:23:01.283048    6976 network_create.go:104] error while trying to create docker network old-k8s-version-20220601112246-9404 192.168.49.0/24: create docker network old-k8s-version-20220601112246-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4f37ed6af3a28aeac9ab555b869013952fb33979b0e8b16821448fdc25a4dd5f (br-4f37ed6af3a2): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:23:01.283350    6976 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220601112246-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4f37ed6af3a28aeac9ab555b869013952fb33979b0e8b16821448fdc25a4dd5f (br-4f37ed6af3a2): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220601112246-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4f37ed6af3a28aeac9ab555b869013952fb33979b0e8b16821448fdc25a4dd5f (br-4f37ed6af3a2): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
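Annotation: the daemon refuses the create above because the requested 192.168.49.0/24 collides with an existing bridge network's range. As a minimal sketch (hypothetical helper, not minikube's actual code), overlap between two IPv4 CIDRs can be checked with the standard `net` package — two ranges overlap exactly when either contains the other's base address:

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two IPv4 CIDR ranges share any addresses —
// the condition Docker rejects with "networks have overlapping IPv4".
func cidrsOverlap(a, b string) bool {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false
	}
	// With valid CIDRs, ranges either nest or are disjoint, so checking
	// each network's base address against the other suffices.
	return na.Contains(nb.IP) || nb.Contains(na.IP)
}

func main() {
	fmt.Println(cidrsOverlap("192.168.49.0/24", "192.168.49.0/24")) // true
	fmt.Println(cidrsOverlap("192.168.49.0/24", "192.168.58.0/24")) // false
}
```

minikube's own `network.go` walks candidate private subnets and reserves the first free one (see the "reserving subnet 192.168.49.0" line above); the failure here is the daemon-side check disagreeing with that reservation.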
	
	I0601 11:23:01.296440    6976 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:23:02.433873    6976 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1372373s)
	I0601 11:23:02.441352    6976 cli_runner.go:164] Run: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:23:03.537892    6976 cli_runner.go:211] docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:23:03.537892    6976 cli_runner.go:217] Completed: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0963545s)
	I0601 11:23:03.537974    6976 client.go:171] LocalClient.Create took 6.5961554s
	I0601 11:23:05.557438    6976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:23:05.563281    6976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:23:06.664552    6976 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:06.664552    6976 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.1010542s)
	I0601 11:23:06.664552    6976 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
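Annotation: the `docker container inspect -f` invocations above use a Go template to pull the published host port for the container's 22/tcp mapping; docker exits 1 ("No such container") before the template is ever evaluated, which is why stdout is empty. As a sketch of what that template computes when the container does exist (the `inspect` struct and `hostPort` helper are mocks for illustration, not docker's or minikube's types):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// inspect mocks the nesting that `docker container inspect` exposes to
// --format templates under .NetworkSettings.Ports.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct{ HostIP, HostPort string }
	}
}

// hostPort evaluates the same template string passed to
// `docker container inspect -f` in the log when resolving the SSH port.
func hostPort(c inspect) (string, error) {
	tmpl, err := template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, c); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	var c inspect
	c.NetworkSettings.Ports = map[string][]struct{ HostIP, HostPort string }{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "55000"}},
	}
	p, _ := hostPort(c)
	fmt.Println(p) // 55000
}
```

The inner `index` selects the "22/tcp" key from the Ports map, the outer `index … 0` takes the first binding, and `.HostPort` yields the host-side port that the SSH client would dial.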
	I0601 11:23:06.960324    6976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:23:08.026626    6976 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:08.026763    6976 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0662899s)
	W0601 11:23:08.026821    6976 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:23:08.026821    6976 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:08.037363    6976 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:23:08.043828    6976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:23:09.104748    6976 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:09.104748    6976 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0609077s)
	I0601 11:23:09.104748    6976 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:09.411119    6976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:23:10.525094    6976 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:10.525171    6976 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.1138611s)
	W0601 11:23:10.525432    6976 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:23:10.525543    6976 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:10.525576    6976 start.go:134] duration metric: createHost completed in 13.5897553s
	I0601 11:23:10.525593    6976 start.go:81] releasing machines lock for "old-k8s-version-20220601112246-9404", held for 13.5902519s
	W0601 11:23:10.525844    6976 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	I0601 11:23:10.541296    6976 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:23:11.597106    6976 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:23:11.597106    6976 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0557981s)
	I0601 11:23:11.597106    6976 delete.go:82] Unable to get host status for old-k8s-version-20220601112246-9404, assuming it has already been deleted: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	W0601 11:23:11.597106    6976 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	
	I0601 11:23:11.597106    6976 start.go:614] Will try again in 5 seconds ...
	I0601 11:23:16.611433    6976 start.go:352] acquiring machines lock for old-k8s-version-20220601112246-9404: {Name:mk41775024acf710d15af281ba02dfa90cd6ead3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:23:16.611520    6976 start.go:356] acquired machines lock for "old-k8s-version-20220601112246-9404" in 58.2µs
	I0601 11:23:16.611520    6976 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:23:16.611520    6976 fix.go:55] fixHost starting: 
	I0601 11:23:16.625049    6976 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:23:17.701944    6976 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:23:17.701944    6976 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0762181s)
	I0601 11:23:17.701944    6976 fix.go:103] recreateIfNeeded on old-k8s-version-20220601112246-9404: state= err=unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:17.701944    6976 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:23:17.706457    6976 out.go:177] * docker "old-k8s-version-20220601112246-9404" container is missing, will recreate.
	I0601 11:23:17.709080    6976 delete.go:124] DEMOLISHING old-k8s-version-20220601112246-9404 ...
	I0601 11:23:17.722781    6976 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:23:18.778318    6976 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:23:18.778318    6976 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0555257s)
	W0601 11:23:18.778318    6976 stop.go:75] unable to get state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:18.778318    6976 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:18.792079    6976 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:23:19.859396    6976 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:23:19.859509    6976 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0670946s)
	I0601 11:23:19.859509    6976 delete.go:82] Unable to get host status for old-k8s-version-20220601112246-9404, assuming it has already been deleted: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:19.867451    6976 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404
	W0601 11:23:20.920459    6976 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:20.920459    6976 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404: (1.0528671s)
	I0601 11:23:20.920459    6976 kic.go:356] could not find the container old-k8s-version-20220601112246-9404 to remove it. will try anyways
	I0601 11:23:20.927157    6976 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:23:22.012963    6976 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:23:22.012963    6976 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.085793s)
	W0601 11:23:22.012963    6976 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:22.020007    6976 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-20220601112246-9404 /bin/bash -c "sudo init 0"
	W0601 11:23:23.125283    6976 cli_runner.go:211] docker exec --privileged -t old-k8s-version-20220601112246-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:23:23.125283    6976 cli_runner.go:217] Completed: docker exec --privileged -t old-k8s-version-20220601112246-9404 /bin/bash -c "sudo init 0": (1.1052629s)
	I0601 11:23:23.125283    6976 oci.go:625] error shutdown old-k8s-version-20220601112246-9404: docker exec --privileged -t old-k8s-version-20220601112246-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:24.147548    6976 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:23:25.214455    6976 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:23:25.214455    6976 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0668944s)
	I0601 11:23:25.214455    6976 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:25.214455    6976 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:23:25.214455    6976 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:25.694388    6976 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:23:26.780250    6976 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:23:26.780250    6976 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.085849s)
	I0601 11:23:26.780250    6976 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:26.780250    6976 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:23:26.780250    6976 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:27.693414    6976 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:23:28.802277    6976 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:23:28.802277    6976 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.1088498s)
	I0601 11:23:28.802277    6976 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:28.802277    6976 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:23:28.802277    6976 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:29.450172    6976 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:23:30.519064    6976 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:23:30.519064    6976 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0688797s)
	I0601 11:23:30.519064    6976 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:30.519064    6976 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:23:30.519064    6976 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:31.645125    6976 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:23:32.741481    6976 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:23:32.741662    6976 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0962281s)
	I0601 11:23:32.741786    6976 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:32.741836    6976 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:23:32.741871    6976 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:34.267955    6976 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:23:35.409600    6976 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:23:35.409710    6976 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.1415894s)
	I0601 11:23:35.409803    6976 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:35.409837    6976 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:23:35.409909    6976 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:38.469368    6976 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:23:39.545304    6976 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:23:39.545304    6976 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0759237s)
	I0601 11:23:39.545304    6976 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:39.545304    6976 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:23:39.545304    6976 oci.go:88] couldn't shut down old-k8s-version-20220601112246-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	 
	I0601 11:23:39.551320    6976 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-20220601112246-9404
	I0601 11:23:40.673530    6976 cli_runner.go:217] Completed: docker rm -f -v old-k8s-version-20220601112246-9404: (1.1220085s)
	I0601 11:23:40.680085    6976 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404
	W0601 11:23:41.782557    6976 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:41.782557    6976 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404: (1.1024592s)
	I0601 11:23:41.789545    6976 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:23:42.946627    6976 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:23:42.946627    6976 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.157069s)
	I0601 11:23:42.954266    6976 network_create.go:272] running [docker network inspect old-k8s-version-20220601112246-9404] to gather additional debugging logs...
	I0601 11:23:42.954266    6976 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404
	W0601 11:23:44.050928    6976 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:44.050928    6976 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404: (1.0966495s)
	I0601 11:23:44.050928    6976 network_create.go:275] error running [docker network inspect old-k8s-version-20220601112246-9404]: docker network inspect old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220601112246-9404
	I0601 11:23:44.050928    6976 network_create.go:277] output of [docker network inspect old-k8s-version-20220601112246-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220601112246-9404
	
	** /stderr **
	W0601 11:23:44.052619    6976 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:23:44.052619    6976 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:23:45.067163    6976 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:23:45.070842    6976 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:23:45.071656    6976 start.go:165] libmachine.API.Create for "old-k8s-version-20220601112246-9404" (driver="docker")
	I0601 11:23:45.071656    6976 client.go:168] LocalClient.Create starting
	I0601 11:23:45.071656    6976 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:23:45.072354    6976 main.go:134] libmachine: Decoding PEM data...
	I0601 11:23:45.072354    6976 main.go:134] libmachine: Parsing certificate...
	I0601 11:23:45.072354    6976 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:23:45.072354    6976 main.go:134] libmachine: Decoding PEM data...
	I0601 11:23:45.072354    6976 main.go:134] libmachine: Parsing certificate...
	I0601 11:23:45.080666    6976 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:23:46.156797    6976 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:23:46.156797    6976 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0759152s)
	I0601 11:23:46.164679    6976 network_create.go:272] running [docker network inspect old-k8s-version-20220601112246-9404] to gather additional debugging logs...
	I0601 11:23:46.164909    6976 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404
	W0601 11:23:47.277736    6976 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:47.277736    6976 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404: (1.1128137s)
	I0601 11:23:47.277736    6976 network_create.go:275] error running [docker network inspect old-k8s-version-20220601112246-9404]: docker network inspect old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220601112246-9404
	I0601 11:23:47.277736    6976 network_create.go:277] output of [docker network inspect old-k8s-version-20220601112246-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220601112246-9404
	
	** /stderr **
	I0601 11:23:47.285740    6976 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:23:48.387860    6976 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.102107s)
	I0601 11:23:48.403901    6976 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001265f8] amended:false}} dirty:map[] misses:0}
	I0601 11:23:48.403901    6976 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:23:48.419899    6976 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001265f8] amended:true}} dirty:map[192.168.49.0:0xc0001265f8 192.168.58.0:0xc0007942a0] misses:0}
	I0601 11:23:48.419899    6976 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:23:48.419899    6976 network_create.go:115] attempt to create docker network old-k8s-version-20220601112246-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:23:48.426898    6976 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404
	W0601 11:23:49.488755    6976 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:49.488755    6976 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: (1.0618451s)
	E0601 11:23:49.488755    6976 network_create.go:104] error while trying to create docker network old-k8s-version-20220601112246-9404 192.168.58.0/24: create docker network old-k8s-version-20220601112246-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 933a5054a9b6b70c66851abe0e08458b068c05dc8bace3628548d36cba1dfc32 (br-933a5054a9b6): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:23:49.488755    6976 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220601112246-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 933a5054a9b6b70c66851abe0e08458b068c05dc8bace3628548d36cba1dfc32 (br-933a5054a9b6): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220601112246-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 933a5054a9b6b70c66851abe0e08458b068c05dc8bace3628548d36cba1dfc32 (br-933a5054a9b6): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:23:49.501750    6976 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:23:50.589885    6976 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0881226s)
	I0601 11:23:50.595892    6976 cli_runner.go:164] Run: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:23:51.690595    6976 cli_runner.go:211] docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:23:51.690664    6976 cli_runner.go:217] Completed: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0945248s)
	I0601 11:23:51.690664    6976 client.go:171] LocalClient.Create took 6.6189323s
	I0601 11:23:53.721496    6976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:23:53.728231    6976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:23:54.838278    6976 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:54.838423    6976 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.109884s)
	I0601 11:23:54.838552    6976 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:55.180300    6976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:23:56.286767    6976 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:56.286767    6976 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.1064541s)
	W0601 11:23:56.286767    6976 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:23:56.286767    6976 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:56.297020    6976 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:23:56.303053    6976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:23:57.386551    6976 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:57.386551    6976 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.083485s)
	I0601 11:23:57.386551    6976 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:57.630286    6976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:23:58.712288    6976 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:58.712371    6976 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0818385s)
	W0601 11:23:58.712726    6976 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:23:58.712758    6976 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:23:58.712804    6976 start.go:134] duration metric: createHost completed in 13.6454839s
	I0601 11:23:58.723960    6976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:23:58.730069    6976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:23:59.827037    6976 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:23:59.827037    6976 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.096956s)
	I0601 11:23:59.827303    6976 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:00.088620    6976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:24:01.175987    6976 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:24:01.175987    6976 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.086741s)
	W0601 11:24:01.175987    6976 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:24:01.175987    6976 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:01.186826    6976 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:24:01.195403    6976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:24:02.267573    6976 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:24:02.267776    6976 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0721578s)
	I0601 11:24:02.267882    6976 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:02.482257    6976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:24:03.518355    6976 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:24:03.518355    6976 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0360859s)
	W0601 11:24:03.518355    6976 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:24:03.518355    6976 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:03.518355    6976 fix.go:57] fixHost completed within 46.906295s
	I0601 11:24:03.518355    6976 start.go:81] releasing machines lock for "old-k8s-version-20220601112246-9404", held for 46.906295s
	W0601 11:24:03.519297    6976 out.go:239] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20220601112246-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-20220601112246-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	
	I0601 11:24:03.524748    6976 out.go:177] 
	W0601 11:24:03.527177    6976 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	
	W0601 11:24:03.527177    6976 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:24:03.527177    6976 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:24:03.530673    6976 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20220601112246-9404 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601112246-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220601112246-9404: exit status 1 (1.1156391s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (2.9472989s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:24:07.686308    4584 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601112246-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (81.34s)

TestStartStop/group/no-preload/serial/FirstStart (81.37s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220601112334-9404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20220601112334-9404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m17.1196133s)

-- stdout --
	* [no-preload-20220601112334-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node no-preload-20220601112334-9404 in cluster no-preload-20220601112334-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-20220601112334-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:23:34.811943    5980 out.go:296] Setting OutFile to fd 1352 ...
	I0601 11:23:34.884791    5980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:23:34.884872    5980 out.go:309] Setting ErrFile to fd 788...
	I0601 11:23:34.884872    5980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:23:34.896463    5980 out.go:303] Setting JSON to false
	I0601 11:23:34.898464    5980 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14550,"bootTime":1654068064,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:23:34.898464    5980 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:23:34.906465    5980 out.go:177] * [no-preload-20220601112334-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:23:34.910007    5980 notify.go:193] Checking for updates...
	I0601 11:23:34.912010    5980 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:23:34.915114    5980 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:23:34.917140    5980 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:23:34.920128    5980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:23:34.924108    5980 config.go:178] Loaded profile config "cert-expiration-20220601112128-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:23:34.924108    5980 config.go:178] Loaded profile config "cert-options-20220601112212-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:23:34.924108    5980 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:23:34.925108    5980 config.go:178] Loaded profile config "old-k8s-version-20220601112246-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:23:34.925108    5980 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:23:37.520594    5980 docker.go:137] docker version: linux-20.10.14
	I0601 11:23:37.542322    5980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:23:39.632111    5980 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0897645s)
	I0601 11:23:39.632111    5980 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:23:38.5636035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:23:39.649110    5980 out.go:177] * Using the docker driver based on user configuration
	I0601 11:23:39.673218    5980 start.go:284] selected driver: docker
	I0601 11:23:39.673842    5980 start.go:806] validating driver "docker" against <nil>
	I0601 11:23:39.673842    5980 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:23:39.745759    5980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:23:41.892182    5980 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1463299s)
	I0601 11:23:41.892182    5980 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:23:40.7711243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:23:41.892182    5980 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:23:41.893191    5980 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:23:41.897194    5980 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:23:41.899188    5980 cni.go:95] Creating CNI manager for ""
	I0601 11:23:41.899188    5980 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:23:41.899188    5980 start_flags.go:306] config:
	{Name:no-preload-20220601112334-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601112334-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:23:41.901191    5980 out.go:177] * Starting control plane node no-preload-20220601112334-9404 in cluster no-preload-20220601112334-9404
	I0601 11:23:41.905188    5980 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:23:41.908182    5980 out.go:177] * Pulling base image ...
	I0601 11:23:41.910186    5980 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:23:41.910186    5980 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:23:41.910186    5980 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20220601112334-9404\config.json ...
	I0601 11:23:41.911192    5980 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0601 11:23:41.911192    5980 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6
	I0601 11:23:41.911192    5980 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6
	I0601 11:23:41.911192    5980 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20220601112334-9404\config.json: {Name:mk2f1d80a796442751929c498ed658b4382c4fc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:23:41.911192    5980 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns:v1.8.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6
	I0601 11:23:41.911192    5980 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6
	I0601 11:23:41.911192    5980 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.5.1-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0
	I0601 11:23:41.911192    5980 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6
	I0601 11:23:41.911192    5980 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6
	I0601 11:23:42.111360    5980 cache.go:107] acquiring lock: {Name:mkb7d2f7b32c5276784ba454e50c746d7fc6c05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:23:42.111360    5980 cache.go:107] acquiring lock: {Name:mk40b809628c4e9673e2a41bf9fb31b8a6b3529d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:23:42.111360    5980 cache.go:107] acquiring lock: {Name:mka0a7f9fce0e132e7529c42bed359c919fc231b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:23:42.111360    5980 cache.go:107] acquiring lock: {Name:mk9255ee8c390126b963cceac501a1fcc40ecb6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:23:42.111360    5980 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:23:42.111360    5980 cache.go:107] acquiring lock: {Name:mk3772b9dcb36c3cbc3aa4dfbe66c5266092e2c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:23:42.111360    5980 cache.go:107] acquiring lock: {Name:mk90a34f529b9ea089d74e18a271c58e34606f29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:23:42.111360    5980 cache.go:107] acquiring lock: {Name:mk1cf2f2eee53b81f1c95945c2dd3783d0c7d992 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:23:42.111360    5980 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6 exists
	I0601 11:23:42.111360    5980 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6 exists
	I0601 11:23:42.111360    5980 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6 exists
	I0601 11:23:42.111360    5980 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0601 11:23:42.111360    5980 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6 exists
	I0601 11:23:42.111360    5980 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.6" took 200.1661ms
	I0601 11:23:42.111360    5980 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6 succeeded
	I0601 11:23:42.111360    5980 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns\\coredns_v1.8.6" took 200.1661ms
	I0601 11:23:42.111360    5980 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.23.6" took 200.1661ms
	I0601 11:23:42.111360    5980 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.23.6" took 200.1661ms
	I0601 11:23:42.111360    5980 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6 succeeded
	I0601 11:23:42.111360    5980 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 200.1661ms
	I0601 11:23:42.111360    5980 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0 exists
	I0601 11:23:42.111360    5980 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0601 11:23:42.111360    5980 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6 exists
	I0601 11:23:42.112370    5980 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.5.1-0" took 201.1762ms
	I0601 11:23:42.112370    5980 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0 succeeded
	I0601 11:23:42.111360    5980 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6 succeeded
	I0601 11:23:42.111360    5980 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6 succeeded
	I0601 11:23:42.111360    5980 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6 exists
	I0601 11:23:42.112370    5980 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.23.6" took 201.1762ms
	I0601 11:23:42.112370    5980 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6 succeeded
	I0601 11:23:42.112370    5980 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.23.6" took 201.1762ms
	I0601 11:23:42.112370    5980 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6 succeeded
	I0601 11:23:42.112370    5980 cache.go:87] Successfully saved all images to host disk.
	I0601 11:23:43.102078    5980 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:23:43.102078    5980 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:23:43.102078    5980 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:23:43.102078    5980 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:23:43.102078    5980 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:23:43.102615    5980 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:23:43.102814    5980 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:23:43.102814    5980 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:23:43.102814    5980 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:23:45.434993    5980 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:23:45.435074    5980 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:23:45.435074    5980 start.go:352] acquiring machines lock for no-preload-20220601112334-9404: {Name:mk28c43b16c7470d23bc1a71d3a7541a869ef61e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:23:45.435074    5980 start.go:356] acquired machines lock for "no-preload-20220601112334-9404" in 0s
	I0601 11:23:45.435074    5980 start.go:91] Provisioning new machine with config: &{Name:no-preload-20220601112334-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601112334-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:23:45.435617    5980 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:23:45.446704    5980 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:23:45.448207    5980 start.go:165] libmachine.API.Create for "no-preload-20220601112334-9404" (driver="docker")
	I0601 11:23:45.448301    5980 client.go:168] LocalClient.Create starting
	I0601 11:23:45.448811    5980 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:23:45.449049    5980 main.go:134] libmachine: Decoding PEM data...
	I0601 11:23:45.449049    5980 main.go:134] libmachine: Parsing certificate...
	I0601 11:23:45.449049    5980 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:23:45.449049    5980 main.go:134] libmachine: Decoding PEM data...
	I0601 11:23:45.449049    5980 main.go:134] libmachine: Parsing certificate...
	I0601 11:23:45.457890    5980 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:23:46.563773    5980 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:23:46.563773    5980 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1057313s)
	I0601 11:23:46.571996    5980 network_create.go:272] running [docker network inspect no-preload-20220601112334-9404] to gather additional debugging logs...
	I0601 11:23:46.572529    5980 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404
	W0601 11:23:47.680709    5980 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:23:47.680911    5980 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404: (1.1081668s)
	I0601 11:23:47.680971    5980 network_create.go:275] error running [docker network inspect no-preload-20220601112334-9404]: docker network inspect no-preload-20220601112334-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220601112334-9404
	I0601 11:23:47.680971    5980 network_create.go:277] output of [docker network inspect no-preload-20220601112334-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220601112334-9404
	
	** /stderr **
	I0601 11:23:47.689958    5980 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:23:48.830019    5980 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1400473s)
	I0601 11:23:48.849774    5980 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00060e3d8] misses:0}
	I0601 11:23:48.849774    5980 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:23:48.850774    5980 network_create.go:115] attempt to create docker network no-preload-20220601112334-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:23:48.855941    5980 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404
	W0601 11:23:49.960177    5980 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:23:49.960177    5980 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: (1.1031697s)
	E0601 11:23:49.960177    5980 network_create.go:104] error while trying to create docker network no-preload-20220601112334-9404 192.168.49.0/24: create docker network no-preload-20220601112334-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c40947040e1922e92723cf77f6cb5682472d48cbb2eef9a7e483bb46d9a42a00 (br-c40947040e19): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:23:49.960177    5980 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220601112334-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c40947040e1922e92723cf77f6cb5682472d48cbb2eef9a7e483bb46d9a42a00 (br-c40947040e19): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:23:49.973404    5980 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:23:51.092387    5980 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1188443s)
	I0601 11:23:51.098910    5980 cli_runner.go:164] Run: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:23:52.179144    5980 cli_runner.go:211] docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:23:52.179144    5980 cli_runner.go:217] Completed: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0800368s)
	I0601 11:23:52.179406    5980 client.go:171] LocalClient.Create took 6.7309832s
	I0601 11:23:54.199483    5980 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:23:54.207327    5980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:23:55.327706    5980 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:23:55.327751    5980 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.1196209s)
	I0601 11:23:55.328130    5980 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:23:55.616987    5980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:23:56.740890    5980 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:23:56.740890    5980 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.1235727s)
	W0601 11:23:56.741129    5980 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:23:56.741163    5980 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:23:56.752292    5980 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:23:56.759922    5980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:23:57.852532    5980 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:23:57.852614    5980 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0925056s)
	I0601 11:23:57.852750    5980 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:23:58.161279    5980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:23:59.255725    5980 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:23:59.255797    5980 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0942088s)
	W0601 11:23:59.255952    5980 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:23:59.255952    5980 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:23:59.255952    5980 start.go:134] duration metric: createHost completed in 13.8201756s
	I0601 11:23:59.255952    5980 start.go:81] releasing machines lock for "no-preload-20220601112334-9404", held for 13.8207188s
	W0601 11:23:59.255952    5980 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	I0601 11:23:59.271112    5980 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:24:00.345857    5980 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:00.345996    5980 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0746517s)
	I0601 11:24:00.346080    5980 delete.go:82] Unable to get host status for no-preload-20220601112334-9404, assuming it has already been deleted: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	W0601 11:24:00.346426    5980 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	
	I0601 11:24:00.346426    5980 start.go:614] Will try again in 5 seconds ...
	I0601 11:24:05.358749    5980 start.go:352] acquiring machines lock for no-preload-20220601112334-9404: {Name:mk28c43b16c7470d23bc1a71d3a7541a869ef61e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:24:05.359076    5980 start.go:356] acquired machines lock for "no-preload-20220601112334-9404" in 327.5µs
	I0601 11:24:05.359263    5980 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:24:05.359263    5980 fix.go:55] fixHost starting: 
	I0601 11:24:05.370642    5980 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:24:06.509811    5980 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:06.509811    5980 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1391561s)
	I0601 11:24:06.509811    5980 fix.go:103] recreateIfNeeded on no-preload-20220601112334-9404: state= err=unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:06.509811    5980 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:24:06.518832    5980 out.go:177] * docker "no-preload-20220601112334-9404" container is missing, will recreate.
	I0601 11:24:06.522774    5980 delete.go:124] DEMOLISHING no-preload-20220601112334-9404 ...
	I0601 11:24:06.535953    5980 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:24:07.654312    5980 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:07.654312    5980 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.118346s)
	W0601 11:24:07.654312    5980 stop.go:75] unable to get state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:07.654312    5980 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:07.668306    5980 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:24:08.727909    5980 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:08.728027    5980 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0593736s)
	I0601 11:24:08.728101    5980 delete.go:82] Unable to get host status for no-preload-20220601112334-9404, assuming it has already been deleted: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:08.736122    5980 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220601112334-9404
	W0601 11:24:09.814619    5980 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:09.814619    5980 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220601112334-9404: (1.0783413s)
	I0601 11:24:09.814704    5980 kic.go:356] could not find the container no-preload-20220601112334-9404 to remove it. will try anyways
	I0601 11:24:09.820992    5980 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:24:10.871700    5980 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:10.871700    5980 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0496881s)
	W0601 11:24:10.871863    5980 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:10.879874    5980 cli_runner.go:164] Run: docker exec --privileged -t no-preload-20220601112334-9404 /bin/bash -c "sudo init 0"
	W0601 11:24:11.958662    5980 cli_runner.go:211] docker exec --privileged -t no-preload-20220601112334-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:24:11.958662    5980 cli_runner.go:217] Completed: docker exec --privileged -t no-preload-20220601112334-9404 /bin/bash -c "sudo init 0": (1.0787757s)
	I0601 11:24:11.958662    5980 oci.go:625] error shutdown no-preload-20220601112334-9404: docker exec --privileged -t no-preload-20220601112334-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:12.971114    5980 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:24:14.044925    5980 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:14.044925    5980 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0735338s)
	I0601 11:24:14.044994    5980 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:14.045058    5980 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:24:14.045058    5980 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:14.528110    5980 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:24:15.618180    5980 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:15.618180    5980 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0900574s)
	I0601 11:24:15.618180    5980 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:15.618180    5980 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:24:15.618180    5980 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:16.527231    5980 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:24:17.581814    5980 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:17.581854    5980 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0543992s)
	I0601 11:24:17.582060    5980 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:17.582060    5980 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:24:17.582137    5980 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:18.240694    5980 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:24:19.282922    5980 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:19.282922    5980 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.041672s)
	I0601 11:24:19.282922    5980 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:19.282922    5980 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:24:19.282922    5980 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:20.403469    5980 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:24:21.463735    5980 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:21.463792    5980 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.060104s)
	I0601 11:24:21.463792    5980 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:21.463792    5980 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:24:21.463792    5980 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:22.983704    5980 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:24:24.080806    5980 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:24.080806    5980 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0970886s)
	I0601 11:24:24.080806    5980 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:24.080806    5980 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:24:24.080806    5980 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:27.141357    5980 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:24:28.188725    5980 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:28.188725    5980 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.047155s)
	I0601 11:24:28.188981    5980 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:28.188981    5980 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:24:28.189060    5980 oci.go:88] couldn't shut down no-preload-20220601112334-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	 
	I0601 11:24:28.197743    5980 cli_runner.go:164] Run: docker rm -f -v no-preload-20220601112334-9404
	I0601 11:24:29.263447    5980 cli_runner.go:217] Completed: docker rm -f -v no-preload-20220601112334-9404: (1.0655692s)
	I0601 11:24:29.271186    5980 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220601112334-9404
	W0601 11:24:30.350312    5980 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:30.350312    5980 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220601112334-9404: (1.078863s)
	I0601 11:24:30.357493    5980 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:24:31.418448    5980 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:24:31.418448    5980 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0609424s)
	I0601 11:24:31.424828    5980 network_create.go:272] running [docker network inspect no-preload-20220601112334-9404] to gather additional debugging logs...
	I0601 11:24:31.424828    5980 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404
	W0601 11:24:32.469044    5980 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:32.469101    5980 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404: (1.0440183s)
	I0601 11:24:32.469101    5980 network_create.go:275] error running [docker network inspect no-preload-20220601112334-9404]: docker network inspect no-preload-20220601112334-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220601112334-9404
	I0601 11:24:32.469101    5980 network_create.go:277] output of [docker network inspect no-preload-20220601112334-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220601112334-9404
	
	** /stderr **
	W0601 11:24:32.469762    5980 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:24:32.469762    5980 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:24:33.478262    5980 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:24:33.483309    5980 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:24:33.483377    5980 start.go:165] libmachine.API.Create for "no-preload-20220601112334-9404" (driver="docker")
	I0601 11:24:33.483377    5980 client.go:168] LocalClient.Create starting
	I0601 11:24:33.483948    5980 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:24:33.484085    5980 main.go:134] libmachine: Decoding PEM data...
	I0601 11:24:33.484085    5980 main.go:134] libmachine: Parsing certificate...
	I0601 11:24:33.484085    5980 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:24:33.484085    5980 main.go:134] libmachine: Decoding PEM data...
	I0601 11:24:33.484085    5980 main.go:134] libmachine: Parsing certificate...
	I0601 11:24:33.491896    5980 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:24:34.567750    5980 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:24:34.567750    5980 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0757741s)
	I0601 11:24:34.574626    5980 network_create.go:272] running [docker network inspect no-preload-20220601112334-9404] to gather additional debugging logs...
	I0601 11:24:34.574626    5980 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404
	W0601 11:24:35.616299    5980 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:35.616344    5980 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404: (1.0414521s)
	I0601 11:24:35.616387    5980 network_create.go:275] error running [docker network inspect no-preload-20220601112334-9404]: docker network inspect no-preload-20220601112334-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220601112334-9404
	I0601 11:24:35.616465    5980 network_create.go:277] output of [docker network inspect no-preload-20220601112334-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220601112334-9404
	
	** /stderr **
	I0601 11:24:35.625549    5980 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:24:36.653230    5980 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0276698s)
	I0601 11:24:36.670296    5980 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00060e3d8] amended:false}} dirty:map[] misses:0}
	I0601 11:24:36.671211    5980 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:24:36.689736    5980 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00060e3d8] amended:true}} dirty:map[192.168.49.0:0xc00060e3d8 192.168.58.0:0xc00060e580] misses:0}
	I0601 11:24:36.689854    5980 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:24:36.689854    5980 network_create.go:115] attempt to create docker network no-preload-20220601112334-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:24:36.697231    5980 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404
	W0601 11:24:37.772510    5980 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:37.772510    5980 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: (1.0752664s)
	E0601 11:24:37.772510    5980 network_create.go:104] error while trying to create docker network no-preload-20220601112334-9404 192.168.58.0/24: create docker network no-preload-20220601112334-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e362b60dfc8901945ce7d6522d2203777029af739a322a0f3889c8eb7887fb64 (br-e362b60dfc89): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:24:37.772510    5980 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220601112334-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e362b60dfc8901945ce7d6522d2203777029af739a322a0f3889c8eb7887fb64 (br-e362b60dfc89): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220601112334-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e362b60dfc8901945ce7d6522d2203777029af739a322a0f3889c8eb7887fb64 (br-e362b60dfc89): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:24:37.784510    5980 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:24:38.839796    5980 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0551259s)
	I0601 11:24:38.845990    5980 cli_runner.go:164] Run: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:24:39.899894    5980 cli_runner.go:211] docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:24:39.899894    5980 cli_runner.go:217] Completed: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0538922s)
	I0601 11:24:39.899894    5980 client.go:171] LocalClient.Create took 6.4164438s
	I0601 11:24:41.913140    5980 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:24:41.919156    5980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:24:42.960268    5980 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:42.960397    5980 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0410186s)
	I0601 11:24:42.960397    5980 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:43.302471    5980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:24:44.356840    5980 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:44.356840    5980 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0543573s)
	W0601 11:24:44.356840    5980 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:24:44.356840    5980 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:44.366776    5980 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:24:44.372816    5980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:24:45.499970    5980 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:45.500035    5980 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.1270554s)
	I0601 11:24:45.500212    5980 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:45.745269    5980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:24:46.816744    5980 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:46.816744    5980 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0714327s)
	W0601 11:24:46.816744    5980 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:24:46.816744    5980 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:46.816744    5980 start.go:134] duration metric: createHost completed in 13.3380786s
	I0601 11:24:46.827711    5980 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:24:46.838759    5980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:24:47.906505    5980 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:47.906505    5980 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0677343s)
	I0601 11:24:47.906505    5980 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:48.164208    5980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:24:49.243034    5980 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:49.243176    5980 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0776793s)
	W0601 11:24:49.243176    5980 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:24:49.243176    5980 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:49.252786    5980 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:24:49.259105    5980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:24:50.354016    5980 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:50.354016    5980 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0947577s)
	I0601 11:24:50.354016    5980 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:50.565694    5980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:24:51.648921    5980 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:24:51.648996    5980 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0826944s)
	W0601 11:24:51.648996    5980 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:24:51.648996    5980 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:24:51.648996    5980 fix.go:57] fixHost completed within 46.289202s
	I0601 11:24:51.648996    5980 start.go:81] releasing machines lock for "no-preload-20220601112334-9404", held for 46.2893885s
	W0601 11:24:51.649677    5980 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-20220601112334-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p no-preload-20220601112334-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	
	I0601 11:24:51.661758    5980 out.go:177] 
	W0601 11:24:51.664109    5980 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	
	W0601 11:24:51.664109    5980 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:24:51.664109    5980 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:24:51.667823    5980 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p no-preload-20220601112334-9404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220601112334-9404: exit status 1 (1.1754088s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (2.9741023s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:24:55.932495     772 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220601112334-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (81.37s)

TestStartStop/group/embed-certs/serial/FirstStart (81.28s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220601112350-9404 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p embed-certs-20220601112350-9404 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m17.084215s)

-- stdout --
	* [embed-certs-20220601112350-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node embed-certs-20220601112350-9404 in cluster embed-certs-20220601112350-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-20220601112350-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:23:50.775471    7004 out.go:296] Setting OutFile to fd 1628 ...
	I0601 11:23:50.829465    7004 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:23:50.829465    7004 out.go:309] Setting ErrFile to fd 1512...
	I0601 11:23:50.829465    7004 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:23:50.856373    7004 out.go:303] Setting JSON to false
	I0601 11:23:50.858545    7004 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14566,"bootTime":1654068064,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:23:50.858545    7004 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:23:50.876567    7004 out.go:177] * [embed-certs-20220601112350-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:23:50.881345    7004 notify.go:193] Checking for updates...
	I0601 11:23:50.885556    7004 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:23:50.888244    7004 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:23:50.890440    7004 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:23:50.892623    7004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:23:50.895743    7004 config.go:178] Loaded profile config "cert-expiration-20220601112128-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:23:50.895743    7004 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:23:50.896697    7004 config.go:178] Loaded profile config "no-preload-20220601112334-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:23:50.896697    7004 config.go:178] Loaded profile config "old-k8s-version-20220601112246-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:23:50.896697    7004 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:23:53.526409    7004 docker.go:137] docker version: linux-20.10.14
	I0601 11:23:53.534769    7004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:23:55.685023    7004 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1502289s)
	I0601 11:23:55.685023    7004 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:23:54.6038185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:23:55.688719    7004 out.go:177] * Using the docker driver based on user configuration
	I0601 11:23:55.691351    7004 start.go:284] selected driver: docker
	I0601 11:23:55.691351    7004 start.go:806] validating driver "docker" against <nil>
	I0601 11:23:55.691351    7004 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:23:55.759784    7004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:23:57.852614    7004 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0926515s)
	I0601 11:23:57.852998    7004 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:23:56.801756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:23:57.853278    7004 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:23:57.853827    7004 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:23:57.856215    7004 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:23:57.858624    7004 cni.go:95] Creating CNI manager for ""
	I0601 11:23:57.858624    7004 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:23:57.858624    7004 start_flags.go:306] config:
	{Name:embed-certs-20220601112350-9404 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601112350-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:23:57.862122    7004 out.go:177] * Starting control plane node embed-certs-20220601112350-9404 in cluster embed-certs-20220601112350-9404
	I0601 11:23:57.863845    7004 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:23:57.866830    7004 out.go:177] * Pulling base image ...
	I0601 11:23:57.868813    7004 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:23:57.868813    7004 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:23:57.868813    7004 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:23:57.869768    7004 cache.go:57] Caching tarball of preloaded images
	I0601 11:23:57.869768    7004 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:23:57.869768    7004 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:23:57.869768    7004 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20220601112350-9404\config.json ...
	I0601 11:23:57.869768    7004 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20220601112350-9404\config.json: {Name:mk36b3042ab094c6aaec71e38734e2906a19eefc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:23:58.975558    7004 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:23:58.975737    7004 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:23:58.976036    7004 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:23:58.976036    7004 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:23:58.976174    7004 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:23:58.976245    7004 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:23:58.976457    7004 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:23:58.976457    7004 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:23:58.976457    7004 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:24:01.314819    7004 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:24:01.314994    7004 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:24:01.315180    7004 start.go:352] acquiring machines lock for embed-certs-20220601112350-9404: {Name:mkab52c380d7df2e54eb0e0135a3345b8a4ef27b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:24:01.315429    7004 start.go:356] acquired machines lock for "embed-certs-20220601112350-9404" in 225.9µs
	I0601 11:24:01.315429    7004 start.go:91] Provisioning new machine with config: &{Name:embed-certs-20220601112350-9404 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601112350-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:24:01.315429    7004 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:24:01.319447    7004 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:24:01.320108    7004 start.go:165] libmachine.API.Create for "embed-certs-20220601112350-9404" (driver="docker")
	I0601 11:24:01.320108    7004 client.go:168] LocalClient.Create starting
	I0601 11:24:01.320108    7004 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:24:01.320774    7004 main.go:134] libmachine: Decoding PEM data...
	I0601 11:24:01.320774    7004 main.go:134] libmachine: Parsing certificate...
	I0601 11:24:01.320774    7004 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:24:01.320774    7004 main.go:134] libmachine: Decoding PEM data...
	I0601 11:24:01.320774    7004 main.go:134] libmachine: Parsing certificate...
	I0601 11:24:01.329250    7004 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:24:02.423858    7004 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:24:02.424007    7004 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0944296s)
	I0601 11:24:02.430572    7004 network_create.go:272] running [docker network inspect embed-certs-20220601112350-9404] to gather additional debugging logs...
	I0601 11:24:02.430572    7004 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404
	W0601 11:24:03.486315    7004 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:24:03.486315    7004 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404: (1.0557308s)
	I0601 11:24:03.486315    7004 network_create.go:275] error running [docker network inspect embed-certs-20220601112350-9404]: docker network inspect embed-certs-20220601112350-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220601112350-9404
	I0601 11:24:03.486315    7004 network_create.go:277] output of [docker network inspect embed-certs-20220601112350-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220601112350-9404
	
	** /stderr **
	I0601 11:24:03.492317    7004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:24:04.564638    7004 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0720152s)
	I0601 11:24:04.587779    7004 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000c80050] misses:0}
	I0601 11:24:04.588391    7004 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:24:04.588391    7004 network_create.go:115] attempt to create docker network embed-certs-20220601112350-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:24:04.595260    7004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404
	W0601 11:24:05.682145    7004 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:24:05.682145    7004 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: (1.0868725s)
	E0601 11:24:05.682145    7004 network_create.go:104] error while trying to create docker network embed-certs-20220601112350-9404 192.168.49.0/24: create docker network embed-certs-20220601112350-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network aeb2d451b28b72ef730be554ccf3969de344b255357611cbe222cf9be54aa499 (br-aeb2d451b28b): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:24:05.682145    7004 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220601112350-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network aeb2d451b28b72ef730be554ccf3969de344b255357611cbe222cf9be54aa499 (br-aeb2d451b28b): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220601112350-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network aeb2d451b28b72ef730be554ccf3969de344b255357611cbe222cf9be54aa499 (br-aeb2d451b28b): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:24:05.696564    7004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:24:06.788059    7004 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0914825s)
	I0601 11:24:06.795823    7004 cli_runner.go:164] Run: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:24:07.936520    7004 cli_runner.go:211] docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:24:07.936520    7004 cli_runner.go:217] Completed: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1406836s)
	I0601 11:24:07.936520    7004 client.go:171] LocalClient.Create took 6.6163356s
	I0601 11:24:09.954226    7004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:24:09.960943    7004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:24:11.039750    7004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:24:11.039750    7004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0787944s)
	I0601 11:24:11.039750    7004 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:11.335398    7004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:24:12.392467    7004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:24:12.392467    7004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0570357s)
	W0601 11:24:12.392660    7004 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:24:12.392660    7004 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:12.402458    7004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:24:12.408379    7004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:24:13.461890    7004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:24:13.461890    7004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0529844s)
	I0601 11:24:13.461890    7004 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:13.773241    7004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:24:14.850403    7004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:24:14.850403    7004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0771487s)
	W0601 11:24:14.850403    7004 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:24:14.850403    7004 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:14.850403    7004 start.go:134] duration metric: createHost completed in 13.5348176s
	I0601 11:24:14.850403    7004 start.go:81] releasing machines lock for "embed-certs-20220601112350-9404", held for 13.5348176s
	W0601 11:24:14.850926    7004 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	I0601 11:24:14.865391    7004 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:24:15.948612    7004 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:15.948678    7004 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.082971s)
	I0601 11:24:15.948737    7004 delete.go:82] Unable to get host status for embed-certs-20220601112350-9404, assuming it has already been deleted: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	W0601 11:24:15.948826    7004 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	
	I0601 11:24:15.948826    7004 start.go:614] Will try again in 5 seconds ...
	I0601 11:24:20.959771    7004 start.go:352] acquiring machines lock for embed-certs-20220601112350-9404: {Name:mkab52c380d7df2e54eb0e0135a3345b8a4ef27b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:24:20.959771    7004 start.go:356] acquired machines lock for "embed-certs-20220601112350-9404" in 0s
	I0601 11:24:20.959771    7004 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:24:20.959771    7004 fix.go:55] fixHost starting: 
	I0601 11:24:20.988758    7004 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:24:22.047876    7004 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:22.047876    7004 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0591061s)
	I0601 11:24:22.047876    7004 fix.go:103] recreateIfNeeded on embed-certs-20220601112350-9404: state= err=unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:22.047876    7004 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:24:22.052369    7004 out.go:177] * docker "embed-certs-20220601112350-9404" container is missing, will recreate.
	I0601 11:24:22.053306    7004 delete.go:124] DEMOLISHING embed-certs-20220601112350-9404 ...
	I0601 11:24:22.062194    7004 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:24:23.104180    7004 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:23.104180    7004 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0419744s)
	W0601 11:24:23.104180    7004 stop.go:75] unable to get state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:23.104180    7004 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:23.118381    7004 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:24:24.190047    7004 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:24.190095    7004 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0714566s)
	I0601 11:24:24.190180    7004 delete.go:82] Unable to get host status for embed-certs-20220601112350-9404, assuming it has already been deleted: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:24.199970    7004 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220601112350-9404
	W0601 11:24:25.237821    7004 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:24:25.237821    7004 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220601112350-9404: (1.037839s)
	I0601 11:24:25.237821    7004 kic.go:356] could not find the container embed-certs-20220601112350-9404 to remove it. will try anyways
	I0601 11:24:25.244776    7004 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:24:26.325961    7004 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:26.325961    7004 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0811732s)
	W0601 11:24:26.325961    7004 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:26.333226    7004 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-20220601112350-9404 /bin/bash -c "sudo init 0"
	W0601 11:24:27.398827    7004 cli_runner.go:211] docker exec --privileged -t embed-certs-20220601112350-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:24:27.398827    7004 cli_runner.go:217] Completed: docker exec --privileged -t embed-certs-20220601112350-9404 /bin/bash -c "sudo init 0": (1.065589s)
	I0601 11:24:27.398827    7004 oci.go:625] error shutdown embed-certs-20220601112350-9404: docker exec --privileged -t embed-certs-20220601112350-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:28.419595    7004 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:24:29.530772    7004 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:29.530772    7004 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.1111652s)
	I0601 11:24:29.530772    7004 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:29.530772    7004 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:24:29.530772    7004 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:30.016155    7004 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:24:31.105869    7004 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:31.105931    7004 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0886867s)
	I0601 11:24:31.106036    7004 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:31.106169    7004 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:24:31.106211    7004 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:32.019600    7004 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:24:33.068791    7004 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:33.068878    7004 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0490255s)
	I0601 11:24:33.068878    7004 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:33.068878    7004 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:24:33.068878    7004 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:33.724379    7004 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:24:34.815902    7004 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:34.816021    7004 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0914155s)
	I0601 11:24:34.816097    7004 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:34.816097    7004 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:24:34.816097    7004 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:35.941956    7004 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:24:36.997449    7004 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:36.997449    7004 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.055481s)
	I0601 11:24:36.997449    7004 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:36.997449    7004 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:24:36.997449    7004 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:38.517740    7004 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:24:39.548243    7004 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:39.548243    7004 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0299191s)
	I0601 11:24:39.548243    7004 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:39.548243    7004 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:24:39.548243    7004 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:42.612421    7004 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:24:43.678709    7004 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:43.678954    7004 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0662757s)
	I0601 11:24:43.679022    7004 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:43.679022    7004 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:24:43.679091    7004 oci.go:88] couldn't shut down embed-certs-20220601112350-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	 
	I0601 11:24:43.686458    7004 cli_runner.go:164] Run: docker rm -f -v embed-certs-20220601112350-9404
	I0601 11:24:44.751111    7004 cli_runner.go:217] Completed: docker rm -f -v embed-certs-20220601112350-9404: (1.0646412s)
	I0601 11:24:44.757907    7004 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220601112350-9404
	W0601 11:24:45.845836    7004 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:24:45.845836    7004 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220601112350-9404: (1.0878123s)
	I0601 11:24:45.851831    7004 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:24:46.942771    7004 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:24:46.942771    7004 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0909279s)
	I0601 11:24:46.948757    7004 network_create.go:272] running [docker network inspect embed-certs-20220601112350-9404] to gather additional debugging logs...
	I0601 11:24:46.948757    7004 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404
	W0601 11:24:48.014263    7004 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:24:48.014322    7004 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404: (1.0654933s)
	I0601 11:24:48.014322    7004 network_create.go:275] error running [docker network inspect embed-certs-20220601112350-9404]: docker network inspect embed-certs-20220601112350-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220601112350-9404
	I0601 11:24:48.014322    7004 network_create.go:277] output of [docker network inspect embed-certs-20220601112350-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220601112350-9404
	
	** /stderr **
	W0601 11:24:48.014952    7004 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:24:48.015486    7004 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:24:49.029178    7004 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:24:49.034316    7004 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:24:49.034559    7004 start.go:165] libmachine.API.Create for "embed-certs-20220601112350-9404" (driver="docker")
	I0601 11:24:49.034559    7004 client.go:168] LocalClient.Create starting
	I0601 11:24:49.035209    7004 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:24:49.035209    7004 main.go:134] libmachine: Decoding PEM data...
	I0601 11:24:49.035209    7004 main.go:134] libmachine: Parsing certificate...
	I0601 11:24:49.035209    7004 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:24:49.035857    7004 main.go:134] libmachine: Decoding PEM data...
	I0601 11:24:49.035857    7004 main.go:134] libmachine: Parsing certificate...
	I0601 11:24:49.052965    7004 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:24:50.137637    7004 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:24:50.137637    7004 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.084659s)
	I0601 11:24:50.143995    7004 network_create.go:272] running [docker network inspect embed-certs-20220601112350-9404] to gather additional debugging logs...
	I0601 11:24:50.143995    7004 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404
	W0601 11:24:51.208907    7004 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:24:51.208907    7004 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404: (1.0646766s)
	I0601 11:24:51.208978    7004 network_create.go:275] error running [docker network inspect embed-certs-20220601112350-9404]: docker network inspect embed-certs-20220601112350-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220601112350-9404
	I0601 11:24:51.208978    7004 network_create.go:277] output of [docker network inspect embed-certs-20220601112350-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220601112350-9404
	
	** /stderr **
	I0601 11:24:51.217735    7004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:24:52.333822    7004 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1160507s)
	I0601 11:24:52.351012    7004 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c80050] amended:false}} dirty:map[] misses:0}
	I0601 11:24:52.351078    7004 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:24:52.367419    7004 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c80050] amended:true}} dirty:map[192.168.49.0:0xc000c80050 192.168.58.0:0xc000006a10] misses:0}
	I0601 11:24:52.367419    7004 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:24:52.367419    7004 network_create.go:115] attempt to create docker network embed-certs-20220601112350-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:24:52.374281    7004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404
	W0601 11:24:53.465117    7004 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:24:53.465117    7004 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: (1.0908234s)
	E0601 11:24:53.465117    7004 network_create.go:104] error while trying to create docker network embed-certs-20220601112350-9404 192.168.58.0/24: create docker network embed-certs-20220601112350-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 58a63d7c98904385aee52fa1a7967feea975d2b9fff1ed867e4bac0e8ece1592 (br-58a63d7c9890): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:24:53.465117    7004 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220601112350-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 58a63d7c98904385aee52fa1a7967feea975d2b9fff1ed867e4bac0e8ece1592 (br-58a63d7c9890): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220601112350-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 58a63d7c98904385aee52fa1a7967feea975d2b9fff1ed867e4bac0e8ece1592 (br-58a63d7c9890): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:24:53.482774    7004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:24:54.558500    7004 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0756617s)
	I0601 11:24:54.564797    7004 cli_runner.go:164] Run: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:24:55.700061    7004 cli_runner.go:211] docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:24:55.700061    7004 cli_runner.go:217] Completed: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1347317s)
	I0601 11:24:55.700061    7004 client.go:171] LocalClient.Create took 6.6654254s
	I0601 11:24:57.712558    7004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:24:57.719266    7004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:24:58.809560    7004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:24:58.809560    7004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0902815s)
	I0601 11:24:58.809560    7004 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:24:59.149312    7004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:25:00.240858    7004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:25:00.240858    7004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0915333s)
	W0601 11:25:00.240858    7004 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:25:00.241947    7004 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:00.258850    7004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:25:00.265887    7004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:25:01.356464    7004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:25:01.356464    7004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0905648s)
	I0601 11:25:01.356464    7004 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:01.596425    7004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:25:02.680094    7004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:25:02.680147    7004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0834366s)
	W0601 11:25:02.680206    7004 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:25:02.680206    7004 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:02.680206    7004 start.go:134] duration metric: createHost completed in 13.650872s
	I0601 11:25:02.691248    7004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:25:02.697829    7004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:25:03.794958    7004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:25:03.795070    7004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.097117s)
	I0601 11:25:03.795128    7004 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:04.047605    7004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:25:05.159035    7004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:25:05.159110    7004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.111229s)
	W0601 11:25:05.159238    7004 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:25:05.159238    7004 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:05.170884    7004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:25:05.181743    7004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:25:06.257701    7004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:25:06.257701    7004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0759457s)
	I0601 11:25:06.257701    7004 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:06.472853    7004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:25:07.581202    7004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:25:07.581249    7004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.1081365s)
	W0601 11:25:07.581662    7004 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:25:07.581725    7004 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:07.581725    7004 fix.go:57] fixHost completed within 46.6214193s
	I0601 11:25:07.581725    7004 start.go:81] releasing machines lock for "embed-certs-20220601112350-9404", held for 46.6214193s
	W0601 11:25:07.582336    7004 out.go:239] * Failed to start docker container. Running "minikube delete -p embed-certs-20220601112350-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p embed-certs-20220601112350-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	
	I0601 11:25:07.586410    7004 out.go:177] 
	W0601 11:25:07.588516    7004 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	
	W0601 11:25:07.588516    7004 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:25:07.588516    7004 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:25:07.593378    7004 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p embed-certs-20220601112350-9404 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601112350-9404

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220601112350-9404: exit status 1 (1.1652466s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (2.9284891s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:25:11.789652    4168 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220601112350-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (81.28s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220601112246-9404 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601112246-9404 create -f testdata\busybox.yaml: exit status 1 (263.3741ms)

** stderr ** 
	error: context "old-k8s-version-20220601112246-9404" does not exist

** /stderr **
start_stop_delete_test.go:198: kubectl --context old-k8s-version-20220601112246-9404 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601112246-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220601112246-9404: exit status 1 (1.104331s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (2.9684436s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:24:12.035036    9412 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601112246-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601112246-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220601112246-9404: exit status 1 (1.1508063s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (2.9110281s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:24:16.104571    7532 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601112246-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (7.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220601112246-9404 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220601112246-9404 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.8657994s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220601112246-9404 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601112246-9404 describe deploy/metrics-server -n kube-system: exit status 1 (258.7046ms)

** stderr ** 
	error: context "old-k8s-version-20220601112246-9404" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220601112246-9404 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601112246-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220601112246-9404: exit status 1 (1.0696362s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (2.865363s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:24:23.181240    9632 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601112246-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (7.08s)

TestStartStop/group/old-k8s-version/serial/Stop (26.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20220601112246-9404 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p old-k8s-version-20220601112246-9404 --alsologtostderr -v=3: exit status 82 (22.5221524s)

-- stdout --
	* Stopping node "old-k8s-version-20220601112246-9404"  ...
	* Stopping node "old-k8s-version-20220601112246-9404"  ...
	* Stopping node "old-k8s-version-20220601112246-9404"  ...
	* Stopping node "old-k8s-version-20220601112246-9404"  ...
	* Stopping node "old-k8s-version-20220601112246-9404"  ...
	* Stopping node "old-k8s-version-20220601112246-9404"  ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:24:23.440796    5072 out.go:296] Setting OutFile to fd 1748 ...
	I0601 11:24:23.500663    5072 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:24:23.500663    5072 out.go:309] Setting ErrFile to fd 1544...
	I0601 11:24:23.500663    5072 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:24:23.512307    5072 out.go:303] Setting JSON to false
	I0601 11:24:23.512943    5072 daemonize_windows.go:44] trying to kill existing schedule stop for profile old-k8s-version-20220601112246-9404...
	I0601 11:24:23.524333    5072 ssh_runner.go:195] Run: systemctl --version
	I0601 11:24:23.530716    5072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:24:26.093351    5072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:24:26.093452    5072 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (2.5625341s)
	I0601 11:24:26.104684    5072 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0601 11:24:26.110392    5072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:24:27.181641    5072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:24:27.181641    5072 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0712373s)
	I0601 11:24:27.181641    5072 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:27.549828    5072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:24:28.599794    5072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:24:28.599794    5072 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0499535s)
	I0601 11:24:28.599794    5072 openrc.go:165] stop output: 
	E0601 11:24:28.599794    5072 daemonize_windows.go:38] error terminating scheduled stop for profile old-k8s-version-20220601112246-9404: stopping schedule-stop service for profile old-k8s-version-20220601112246-9404: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:28.599794    5072 mustload.go:65] Loading cluster: old-k8s-version-20220601112246-9404
	I0601 11:24:28.600798    5072 config.go:178] Loaded profile config "old-k8s-version-20220601112246-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:24:28.600798    5072 stop.go:39] StopHost: old-k8s-version-20220601112246-9404
	I0601 11:24:28.604797    5072 out.go:177] * Stopping node "old-k8s-version-20220601112246-9404"  ...
	I0601 11:24:28.620794    5072 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:24:29.686882    5072 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:29.686959    5072 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0658415s)
	W0601 11:24:29.686959    5072 stop.go:75] unable to get state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	W0601 11:24:29.686959    5072 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:29.686959    5072 retry.go:31] will retry after 937.714187ms: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:30.633145    5072 stop.go:39] StopHost: old-k8s-version-20220601112246-9404
	I0601 11:24:30.637026    5072 out.go:177] * Stopping node "old-k8s-version-20220601112246-9404"  ...
	I0601 11:24:30.653701    5072 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:24:31.755194    5072 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:31.755345    5072 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.1013103s)
	W0601 11:24:31.755345    5072 stop.go:75] unable to get state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	W0601 11:24:31.755345    5072 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:31.755345    5072 retry.go:31] will retry after 1.386956246s: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:33.147515    5072 stop.go:39] StopHost: old-k8s-version-20220601112246-9404
	I0601 11:24:33.154909    5072 out.go:177] * Stopping node "old-k8s-version-20220601112246-9404"  ...
	I0601 11:24:33.171651    5072 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:24:34.238637    5072 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:34.238637    5072 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.066974s)
	W0601 11:24:34.238637    5072 stop.go:75] unable to get state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	W0601 11:24:34.238637    5072 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:34.238637    5072 retry.go:31] will retry after 2.670351914s: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:36.918389    5072 stop.go:39] StopHost: old-k8s-version-20220601112246-9404
	I0601 11:24:36.923091    5072 out.go:177] * Stopping node "old-k8s-version-20220601112246-9404"  ...
	I0601 11:24:36.946456    5072 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:24:38.026150    5072 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:38.026150    5072 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0795758s)
	W0601 11:24:38.026150    5072 stop.go:75] unable to get state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	W0601 11:24:38.026150    5072 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:38.026150    5072 retry.go:31] will retry after 1.909024939s: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:39.947485    5072 stop.go:39] StopHost: old-k8s-version-20220601112246-9404
	I0601 11:24:39.952353    5072 out.go:177] * Stopping node "old-k8s-version-20220601112246-9404"  ...
	I0601 11:24:39.967481    5072 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:24:40.977606    5072 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:40.977606    5072 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0100614s)
	W0601 11:24:40.977606    5072 stop.go:75] unable to get state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	W0601 11:24:40.977606    5072 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:40.977606    5072 retry.go:31] will retry after 3.323628727s: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:44.310223    5072 stop.go:39] StopHost: old-k8s-version-20220601112246-9404
	I0601 11:24:44.315327    5072 out.go:177] * Stopping node "old-k8s-version-20220601112246-9404"  ...
	I0601 11:24:44.334851    5072 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:24:45.420842    5072 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:24:45.420842    5072 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0859789s)
	W0601 11:24:45.420842    5072 stop.go:75] unable to get state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	W0601 11:24:45.420842    5072 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:24:45.423824    5072 out.go:177] 
	W0601 11:24:45.426836    5072 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-20220601112246-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-20220601112246-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:24:45.426836    5072 out.go:239] * 
	* 
	W0601 11:24:45.684011    5072 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:24:45.688284    5072 out.go:177] 

** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p old-k8s-version-20220601112246-9404 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601112246-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220601112246-9404: exit status 1 (1.1324667s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (2.9125167s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:24:49.762076   10000 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601112246-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (26.58s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (10.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (2.947977s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:24:52.709559    4152 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404

** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220601112246-9404 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220601112246-9404 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.053003s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601112246-9404

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220601112246-9404: exit status 1 (1.1546296s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (2.959363s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:24:59.883524    8196 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601112246-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (10.12s)

TestStartStop/group/no-preload/serial/DeployApp (8.67s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220601112334-9404 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context no-preload-20220601112334-9404 create -f testdata\busybox.yaml: exit status 1 (247.4655ms)

** stderr ** 
	error: context "no-preload-20220601112334-9404" does not exist

** /stderr **
start_stop_delete_test.go:198: kubectl --context no-preload-20220601112334-9404 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220601112334-9404: exit status 1 (1.143853s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (3.0096736s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:25:00.363600    2692 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220601112334-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601112334-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220601112334-9404: exit status 1 (1.1827749s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (3.0563259s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:25:04.597595    6976 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220601112334-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (8.67s)

TestStartStop/group/old-k8s-version/serial/SecondStart (118.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220601112246-9404 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20220601112246-9404 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: exit status 60 (1m54.0801525s)

-- stdout --
	* [old-k8s-version-20220601112246-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220601112246-9404 in cluster old-k8s-version-20220601112246-9404
	* Pulling base image ...
	* docker "old-k8s-version-20220601112246-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20220601112246-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:25:00.152686    7256 out.go:296] Setting OutFile to fd 1384 ...
	I0601 11:25:00.221017    7256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:25:00.221017    7256 out.go:309] Setting ErrFile to fd 1560...
	I0601 11:25:00.221017    7256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:25:00.236006    7256 out.go:303] Setting JSON to false
	I0601 11:25:00.237980    7256 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14635,"bootTime":1654068065,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:25:00.238979    7256 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:25:00.244853    7256 out.go:177] * [old-k8s-version-20220601112246-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:25:00.260853    7256 notify.go:193] Checking for updates...
	I0601 11:25:00.266853    7256 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:25:00.270855    7256 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:25:00.275918    7256 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:25:00.279876    7256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:25:00.282865    7256 config.go:178] Loaded profile config "old-k8s-version-20220601112246-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:25:00.285858    7256 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0601 11:25:00.287862    7256 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:25:03.075891    7256 docker.go:137] docker version: linux-20.10.14
	I0601 11:25:03.084008    7256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:25:05.221694    7256 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1376198s)
	I0601 11:25:05.224323    7256 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:25:04.1457005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:25:05.233390    7256 out.go:177] * Using the docker driver based on existing profile
	I0601 11:25:05.236158    7256 start.go:284] selected driver: docker
	I0601 11:25:05.236158    7256 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220601112246-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601112246-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:25:05.237169    7256 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:25:05.315488    7256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:25:07.394878    7256 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0791911s)
	I0601 11:25:07.395424    7256 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:25:06.3340372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:25:07.395796    7256 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:25:07.395796    7256 cni.go:95] Creating CNI manager for ""
	I0601 11:25:07.395796    7256 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:25:07.395796    7256 start_flags.go:306] config:
	{Name:old-k8s-version-20220601112246-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601112246-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:25:07.400076    7256 out.go:177] * Starting control plane node old-k8s-version-20220601112246-9404 in cluster old-k8s-version-20220601112246-9404
	I0601 11:25:07.401811    7256 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:25:07.403979    7256 out.go:177] * Pulling base image ...
	I0601 11:25:07.407457    7256 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:25:07.407539    7256 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:25:07.407539    7256 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 11:25:07.407539    7256 cache.go:57] Caching tarball of preloaded images
	I0601 11:25:07.407539    7256 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:25:07.408253    7256 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 11:25:07.408253    7256 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20220601112246-9404\config.json ...
	I0601 11:25:08.495970    7256 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:25:08.495970    7256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:25:08.495970    7256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:25:08.495970    7256 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:25:08.495970    7256 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:25:08.495970    7256 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:25:08.495970    7256 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:25:08.495970    7256 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:25:08.495970    7256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:25:10.843819    7256 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:25:10.843819    7256 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:25:10.843819    7256 start.go:352] acquiring machines lock for old-k8s-version-20220601112246-9404: {Name:mk41775024acf710d15af281ba02dfa90cd6ead3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:25:10.843819    7256 start.go:356] acquired machines lock for "old-k8s-version-20220601112246-9404" in 0s
	I0601 11:25:10.843819    7256 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:25:10.844343    7256 fix.go:55] fixHost starting: 
	I0601 11:25:10.858238    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:25:11.913123    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:11.913123    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0548729s)
	I0601 11:25:11.913123    7256 fix.go:103] recreateIfNeeded on old-k8s-version-20220601112246-9404: state= err=unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:11.913123    7256 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:25:11.922185    7256 out.go:177] * docker "old-k8s-version-20220601112246-9404" container is missing, will recreate.
	I0601 11:25:11.925163    7256 delete.go:124] DEMOLISHING old-k8s-version-20220601112246-9404 ...
	I0601 11:25:11.938155    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:25:13.045812    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:13.045874    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.1075593s)
	W0601 11:25:13.045874    7256 stop.go:75] unable to get state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:13.045874    7256 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:13.059836    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:25:14.181745    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:14.182000    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.1218964s)
	I0601 11:25:14.182086    7256 delete.go:82] Unable to get host status for old-k8s-version-20220601112246-9404, assuming it has already been deleted: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:14.189498    7256 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404
	W0601 11:25:15.252421    7256 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:25:15.252421    7256 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404: (1.0629107s)
	I0601 11:25:15.252421    7256 kic.go:356] could not find the container old-k8s-version-20220601112246-9404 to remove it. will try anyways
	I0601 11:25:15.257788    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:25:16.374298    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:16.374298    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.1164971s)
	W0601 11:25:16.374298    7256 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:16.380922    7256 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-20220601112246-9404 /bin/bash -c "sudo init 0"
	W0601 11:25:17.473072    7256 cli_runner.go:211] docker exec --privileged -t old-k8s-version-20220601112246-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:25:17.473072    7256 cli_runner.go:217] Completed: docker exec --privileged -t old-k8s-version-20220601112246-9404 /bin/bash -c "sudo init 0": (1.0916192s)
	I0601 11:25:17.473072    7256 oci.go:625] error shutdown old-k8s-version-20220601112246-9404: docker exec --privileged -t old-k8s-version-20220601112246-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:18.488265    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:25:19.552471    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:19.552471    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0641941s)
	I0601 11:25:19.552471    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:19.552471    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:25:19.552471    7256 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:20.117660    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:25:21.213636    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:21.213854    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.095416s)
	I0601 11:25:21.213927    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:21.213927    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:25:21.214005    7256 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:22.311968    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:25:23.374741    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:23.374822    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0627276s)
	I0601 11:25:23.374890    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:23.374930    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:25:23.374999    7256 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:24.694166    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:25:25.741720    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:25.741850    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0473611s)
	I0601 11:25:25.741850    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:25.741850    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:25:25.741850    7256 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:27.341337    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:25:28.399067    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:28.399067    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0577178s)
	I0601 11:25:28.399067    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:28.399067    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:25:28.399067    7256 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:30.749096    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:25:31.796597    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:31.796597    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0474897s)
	I0601 11:25:31.796597    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:31.796597    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:25:31.796597    7256 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:36.322479    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:25:37.391920    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:37.391920    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0694296s)
	I0601 11:25:37.391920    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:37.391920    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:25:37.391920    7256 oci.go:88] couldn't shut down old-k8s-version-20220601112246-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	 
	I0601 11:25:37.399790    7256 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-20220601112246-9404
	I0601 11:25:38.508275    7256 cli_runner.go:217] Completed: docker rm -f -v old-k8s-version-20220601112246-9404: (1.1084727s)
	I0601 11:25:38.516921    7256 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404
	W0601 11:25:39.580826    7256 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:25:39.580903    7256 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404: (1.0633208s)
	I0601 11:25:39.588257    7256 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:25:40.623584    7256 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:25:40.623584    7256 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0353147s)
	I0601 11:25:40.630992    7256 network_create.go:272] running [docker network inspect old-k8s-version-20220601112246-9404] to gather additional debugging logs...
	I0601 11:25:40.630992    7256 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404
	W0601 11:25:41.707119    7256 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:25:41.707119    7256 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404: (1.0761142s)
	I0601 11:25:41.707119    7256 network_create.go:275] error running [docker network inspect old-k8s-version-20220601112246-9404]: docker network inspect old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220601112246-9404
	I0601 11:25:41.707119    7256 network_create.go:277] output of [docker network inspect old-k8s-version-20220601112246-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220601112246-9404
	
	** /stderr **
	W0601 11:25:41.708317    7256 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:25:41.708317    7256 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:25:42.717939    7256 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:25:42.737880    7256 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:25:42.738962    7256 start.go:165] libmachine.API.Create for "old-k8s-version-20220601112246-9404" (driver="docker")
	I0601 11:25:42.738962    7256 client.go:168] LocalClient.Create starting
	I0601 11:25:42.739926    7256 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:25:42.740226    7256 main.go:134] libmachine: Decoding PEM data...
	I0601 11:25:42.740297    7256 main.go:134] libmachine: Parsing certificate...
	I0601 11:25:42.740424    7256 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:25:42.740625    7256 main.go:134] libmachine: Decoding PEM data...
	I0601 11:25:42.740625    7256 main.go:134] libmachine: Parsing certificate...
	I0601 11:25:42.748496    7256 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:25:43.796458    7256 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:25:43.796458    7256 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.04795s)
	I0601 11:25:43.805023    7256 network_create.go:272] running [docker network inspect old-k8s-version-20220601112246-9404] to gather additional debugging logs...
	I0601 11:25:43.805023    7256 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404
	W0601 11:25:44.883147    7256 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:25:44.883147    7256 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404: (1.0781111s)
	I0601 11:25:44.883147    7256 network_create.go:275] error running [docker network inspect old-k8s-version-20220601112246-9404]: docker network inspect old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220601112246-9404
	I0601 11:25:44.883147    7256 network_create.go:277] output of [docker network inspect old-k8s-version-20220601112246-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220601112246-9404
	
	** /stderr **
	I0601 11:25:44.890701    7256 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:25:45.966099    7256 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0753863s)
	I0601 11:25:45.983279    7256 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b8e038] misses:0}
	I0601 11:25:45.983279    7256 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:25:45.983279    7256 network_create.go:115] attempt to create docker network old-k8s-version-20220601112246-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:25:45.989457    7256 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404
	W0601 11:25:47.096741    7256 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:25:47.096796    7256 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: (1.1069746s)
	E0601 11:25:47.096796    7256 network_create.go:104] error while trying to create docker network old-k8s-version-20220601112246-9404 192.168.49.0/24: create docker network old-k8s-version-20220601112246-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dbdac6768ee7e74aa5b4f93e3ab675c801597d7145c6c3479e1a30e7be7e2dd8 (br-dbdac6768ee7): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:25:47.096796    7256 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220601112246-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dbdac6768ee7e74aa5b4f93e3ab675c801597d7145c6c3479e1a30e7be7e2dd8 (br-dbdac6768ee7): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220601112246-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dbdac6768ee7e74aa5b4f93e3ab675c801597d7145c6c3479e1a30e7be7e2dd8 (br-dbdac6768ee7): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:25:47.110366    7256 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:25:48.184605    7256 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0742267s)
	I0601 11:25:48.190608    7256 cli_runner.go:164] Run: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:25:49.328682    7256 cli_runner.go:211] docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:25:49.328682    7256 cli_runner.go:217] Completed: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1380612s)
	I0601 11:25:49.328682    7256 client.go:171] LocalClient.Create took 6.5896449s
	I0601 11:25:51.350528    7256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:25:51.357521    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:25:52.447452    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:25:52.447452    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0899185s)
	I0601 11:25:52.447452    7256 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:52.626855    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:25:53.704324    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:25:53.704324    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0774558s)
	W0601 11:25:53.704324    7256 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:25:53.704324    7256 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:53.713291    7256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:25:53.720324    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:25:54.841921    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:25:54.841921    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.1215839s)
	I0601 11:25:54.841921    7256 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:55.056774    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:25:56.254956    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:25:56.254956    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.19801s)
	W0601 11:25:56.254956    7256 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:25:56.254956    7256 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:56.254956    7256 start.go:134] duration metric: createHost completed in 13.5366331s
	I0601 11:25:56.265931    7256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:25:56.271547    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:25:57.382667    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:25:57.382667    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.1111074s)
	I0601 11:25:57.382667    7256 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:57.722626    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:25:58.823746    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:25:58.823746    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.1011073s)
	W0601 11:25:58.823746    7256 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:25:58.823746    7256 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:25:58.832684    7256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:25:58.841241    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:25:59.946624    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:25:59.946624    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.1053704s)
	I0601 11:25:59.946624    7256 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:00.189282    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:26:01.335224    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:01.335291    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.1458725s)
	W0601 11:26:01.335541    7256 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:26:01.335610    7256 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:01.335663    7256 fix.go:57] fixHost completed within 50.4907417s
	I0601 11:26:01.335663    7256 start.go:81] releasing machines lock for "old-k8s-version-20220601112246-9404", held for 50.4912657s
	W0601 11:26:01.335912    7256 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	W0601 11:26:01.336408    7256 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	
	I0601 11:26:01.336448    7256 start.go:614] Will try again in 5 seconds ...
	I0601 11:26:06.346819    7256 start.go:352] acquiring machines lock for old-k8s-version-20220601112246-9404: {Name:mk41775024acf710d15af281ba02dfa90cd6ead3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:26:06.346819    7256 start.go:356] acquired machines lock for "old-k8s-version-20220601112246-9404" in 0s
	I0601 11:26:06.346819    7256 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:26:06.347347    7256 fix.go:55] fixHost starting: 
	I0601 11:26:06.361475    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:26:07.443700    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:07.443700    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0822124s)
	I0601 11:26:07.443700    7256 fix.go:103] recreateIfNeeded on old-k8s-version-20220601112246-9404: state= err=unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:07.443700    7256 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:26:07.450467    7256 out.go:177] * docker "old-k8s-version-20220601112246-9404" container is missing, will recreate.
	I0601 11:26:07.452490    7256 delete.go:124] DEMOLISHING old-k8s-version-20220601112246-9404 ...
	I0601 11:26:07.465428    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:26:08.560262    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:08.560331    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0946593s)
	W0601 11:26:08.560410    7256 stop.go:75] unable to get state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:08.560439    7256 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:08.574013    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:26:09.628610    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:09.628610    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0545849s)
	I0601 11:26:09.628610    7256 delete.go:82] Unable to get host status for old-k8s-version-20220601112246-9404, assuming it has already been deleted: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:09.634612    7256 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404
	W0601 11:26:10.689109    7256 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:10.689109    7256 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404: (1.0544856s)
	I0601 11:26:10.689109    7256 kic.go:356] could not find the container old-k8s-version-20220601112246-9404 to remove it. will try anyways
	I0601 11:26:10.696106    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:26:11.791021    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:11.791021    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0947976s)
	W0601 11:26:11.791021    7256 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:11.802940    7256 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-20220601112246-9404 /bin/bash -c "sudo init 0"
	W0601 11:26:12.907008    7256 cli_runner.go:211] docker exec --privileged -t old-k8s-version-20220601112246-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:26:12.907008    7256 cli_runner.go:217] Completed: docker exec --privileged -t old-k8s-version-20220601112246-9404 /bin/bash -c "sudo init 0": (1.1040555s)
	I0601 11:26:12.907008    7256 oci.go:625] error shutdown old-k8s-version-20220601112246-9404: docker exec --privileged -t old-k8s-version-20220601112246-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:13.920446    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:26:15.022493    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:15.022493    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.1019051s)
	I0601 11:26:15.022493    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:15.022493    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:26:15.022493    7256 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:15.516992    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:26:16.598616    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:16.598775    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0815358s)
	I0601 11:26:16.598860    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:16.598860    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:26:16.598860    7256 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:17.203980    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:26:18.298365    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:18.298450    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0941401s)
	I0601 11:26:18.298485    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:18.298675    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:26:18.298759    7256 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:19.205662    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:26:20.283433    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:20.283717    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0777578s)
	I0601 11:26:20.283793    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:20.283793    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:26:20.283793    7256 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:22.296293    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:26:23.374685    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:23.374867    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0783795s)
	I0601 11:26:23.374936    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:23.375005    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:26:23.375005    7256 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:25.203027    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:26:26.243014    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:26.243014    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0399754s)
	I0601 11:26:26.243014    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:26.243014    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:26:26.243014    7256 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:28.927928    7256 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:26:29.986512    7256 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:29.986512    7256 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (1.0584759s)
	I0601 11:26:29.986512    7256 oci.go:637] temporary error verifying shutdown: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:29.986512    7256 oci.go:639] temporary error: container old-k8s-version-20220601112246-9404 status is  but expect it to be exited
	I0601 11:26:29.986512    7256 oci.go:88] couldn't shut down old-k8s-version-20220601112246-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	 
	I0601 11:26:29.996275    7256 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-20220601112246-9404
	I0601 11:26:31.102246    7256 cli_runner.go:217] Completed: docker rm -f -v old-k8s-version-20220601112246-9404: (1.1059575s)
	I0601 11:26:31.109246    7256 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404
	W0601 11:26:32.190301    7256 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:32.190301    7256 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} old-k8s-version-20220601112246-9404: (1.0810421s)
	I0601 11:26:32.197661    7256 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:26:33.339365    7256 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:26:33.339365    7256 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.14169s)
	I0601 11:26:33.346982    7256 network_create.go:272] running [docker network inspect old-k8s-version-20220601112246-9404] to gather additional debugging logs...
	I0601 11:26:33.346982    7256 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404
	W0601 11:26:34.437091    7256 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:34.437091    7256 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404: (1.0900971s)
	I0601 11:26:34.437091    7256 network_create.go:275] error running [docker network inspect old-k8s-version-20220601112246-9404]: docker network inspect old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220601112246-9404
	I0601 11:26:34.437091    7256 network_create.go:277] output of [docker network inspect old-k8s-version-20220601112246-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220601112246-9404
	
	** /stderr **
	W0601 11:26:34.438098    7256 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:26:34.438098    7256 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:26:35.448853    7256 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:26:35.458334    7256 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:26:35.458334    7256 start.go:165] libmachine.API.Create for "old-k8s-version-20220601112246-9404" (driver="docker")
	I0601 11:26:35.458334    7256 client.go:168] LocalClient.Create starting
	I0601 11:26:35.459013    7256 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:26:35.459532    7256 main.go:134] libmachine: Decoding PEM data...
	I0601 11:26:35.459596    7256 main.go:134] libmachine: Parsing certificate...
	I0601 11:26:35.459596    7256 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:26:35.459596    7256 main.go:134] libmachine: Decoding PEM data...
	I0601 11:26:35.459596    7256 main.go:134] libmachine: Parsing certificate...
	I0601 11:26:35.468562    7256 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:26:36.539581    7256 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:26:36.539657    7256 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0706958s)
	I0601 11:26:36.546316    7256 network_create.go:272] running [docker network inspect old-k8s-version-20220601112246-9404] to gather additional debugging logs...
	I0601 11:26:36.546846    7256 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601112246-9404
	W0601 11:26:37.600052    7256 cli_runner.go:211] docker network inspect old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:37.600052    7256 cli_runner.go:217] Completed: docker network inspect old-k8s-version-20220601112246-9404: (1.0531943s)
	I0601 11:26:37.600052    7256 network_create.go:275] error running [docker network inspect old-k8s-version-20220601112246-9404]: docker network inspect old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220601112246-9404
	I0601 11:26:37.600052    7256 network_create.go:277] output of [docker network inspect old-k8s-version-20220601112246-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220601112246-9404
	
	** /stderr **
	I0601 11:26:37.607026    7256 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:26:38.686539    7256 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0795011s)
	I0601 11:26:38.701866    7256 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b8e038] amended:false}} dirty:map[] misses:0}
	I0601 11:26:38.702946    7256 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:26:38.719525    7256 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b8e038] amended:true}} dirty:map[192.168.49.0:0xc000b8e038 192.168.58.0:0xc000b8e0f0] misses:0}
	I0601 11:26:38.719761    7256 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:26:38.719761    7256 network_create.go:115] attempt to create docker network old-k8s-version-20220601112246-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:26:38.726400    7256 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404
	W0601 11:26:39.760874    7256 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:39.760874    7256 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: (1.0344627s)
	E0601 11:26:39.760874    7256 network_create.go:104] error while trying to create docker network old-k8s-version-20220601112246-9404 192.168.58.0/24: create docker network old-k8s-version-20220601112246-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 301a6aa21b0d6553bef699adadc64ed63e1dde284eaae8be5364045b92d86b64 (br-301a6aa21b0d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:26:39.760874    7256 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220601112246-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 301a6aa21b0d6553bef699adadc64ed63e1dde284eaae8be5364045b92d86b64 (br-301a6aa21b0d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220601112246-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 301a6aa21b0d6553bef699adadc64ed63e1dde284eaae8be5364045b92d86b64 (br-301a6aa21b0d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:26:39.772881    7256 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:26:40.852509    7256 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0796153s)
	I0601 11:26:40.858982    7256 cli_runner.go:164] Run: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:26:41.935887    7256 cli_runner.go:211] docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:26:41.936044    7256 cli_runner.go:217] Completed: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0768596s)
	I0601 11:26:41.936090    7256 client.go:171] LocalClient.Create took 6.4776813s
	I0601 11:26:43.959485    7256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:26:43.965698    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:26:45.102590    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:45.102921    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.1368794s)
	I0601 11:26:45.102921    7256 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:45.395429    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:26:46.501446    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:46.501446    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.1060037s)
	W0601 11:26:46.501446    7256 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:26:46.501446    7256 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:46.510434    7256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:26:46.517650    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:26:47.618781    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:47.618781    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.1011188s)
	I0601 11:26:47.618781    7256 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:47.833128    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:26:48.921761    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:48.921807    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0885343s)
	W0601 11:26:48.922225    7256 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:26:48.922331    7256 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:48.922331    7256 start.go:134] duration metric: createHost completed in 13.4733251s
	I0601 11:26:48.934679    7256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:26:48.940675    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:26:50.025895    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:50.025895    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0840676s)
	I0601 11:26:50.026210    7256 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:50.360311    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:26:51.426363    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:51.426363    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.06604s)
	W0601 11:26:51.426363    7256 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:26:51.426363    7256 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:51.436878    7256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:26:51.442761    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:26:52.516476    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:52.516476    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0737025s)
	I0601 11:26:52.516476    7256 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:52.876963    7256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404
	W0601 11:26:53.950350    7256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404 returned with exit code 1
	I0601 11:26:53.950350    7256 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: (1.0733753s)
	W0601 11:26:53.950350    7256 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:26:53.950350    7256 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220601112246-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601112246-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	I0601 11:26:53.950350    7256 fix.go:57] fixHost completed within 47.6024605s
	I0601 11:26:53.950350    7256 start.go:81] releasing machines lock for "old-k8s-version-20220601112246-9404", held for 47.602988s
	W0601 11:26:53.951033    7256 out.go:239] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20220601112246-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-20220601112246-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	
	I0601 11:26:53.958222    7256 out.go:177] 
	W0601 11:26:53.960728    7256 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220601112246-9404 container: docker volume create old-k8s-version-20220601112246-9404 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601112246-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220601112246-9404: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220601112246-9404': mkdir /var/lib/docker/volumes/old-k8s-version-20220601112246-9404: read-only file system
	
	W0601 11:26:53.960728    7256 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:26:53.960728    7256 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:26:53.964177    7256 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20220601112246-9404 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601112246-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220601112246-9404: exit status 1 (1.187996s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (3.0482443s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:26:58.398410    3644 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601112246-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (118.51s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (7.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220601112334-9404 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220601112334-9404 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.966324s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220601112334-9404 describe deploy/metrics-server -n kube-system

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context no-preload-20220601112334-9404 describe deploy/metrics-server -n kube-system: exit status 1 (271.9982ms)

** stderr ** 
	error: context "no-preload-20220601112334-9404" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context no-preload-20220601112334-9404 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220601112334-9404: exit status 1 (1.103562s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (2.9275517s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:25:11.897132    4616 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220601112334-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (7.28s)

TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220601112350-9404 create -f testdata\busybox.yaml

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context embed-certs-20220601112350-9404 create -f testdata\busybox.yaml: exit status 1 (248.5664ms)

** stderr ** 
	error: context "embed-certs-20220601112350-9404" does not exist

** /stderr **
start_stop_delete_test.go:198: kubectl --context embed-certs-20220601112350-9404 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601112350-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220601112350-9404: exit status 1 (1.1311668s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (2.9126932s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:25:16.096865    8728 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220601112350-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601112350-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220601112350-9404: exit status 1 (1.1458912s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (2.8715082s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:25:20.122953    6456 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220601112350-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

TestStartStop/group/no-preload/serial/Stop (26.47s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20220601112334-9404 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p no-preload-20220601112334-9404 --alsologtostderr -v=3: exit status 82 (22.480044s)

-- stdout --
	* Stopping node "no-preload-20220601112334-9404"  ...
	* Stopping node "no-preload-20220601112334-9404"  ...
	* Stopping node "no-preload-20220601112334-9404"  ...
	* Stopping node "no-preload-20220601112334-9404"  ...
	* Stopping node "no-preload-20220601112334-9404"  ...
	* Stopping node "no-preload-20220601112334-9404"  ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:25:12.154345    9988 out.go:296] Setting OutFile to fd 1880 ...
	I0601 11:25:12.217023    9988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:25:12.217023    9988 out.go:309] Setting ErrFile to fd 1948...
	I0601 11:25:12.217023    9988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:25:12.226548    9988 out.go:303] Setting JSON to false
	I0601 11:25:12.227161    9988 daemonize_windows.go:44] trying to kill existing schedule stop for profile no-preload-20220601112334-9404...
	I0601 11:25:12.237313    9988 ssh_runner.go:195] Run: systemctl --version
	I0601 11:25:12.244583    9988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:25:14.783167    9988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:25:14.783167    9988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (2.5385546s)
	I0601 11:25:14.794150    9988 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0601 11:25:14.801100    9988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:25:15.875497    9988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:25:15.875497    9988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0743844s)
	I0601 11:25:15.875497    9988 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:16.248679    9988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:25:17.314155    9988 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:25:17.314155    9988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0654645s)
	I0601 11:25:17.314155    9988 openrc.go:165] stop output: 
	E0601 11:25:17.314155    9988 daemonize_windows.go:38] error terminating scheduled stop for profile no-preload-20220601112334-9404: stopping schedule-stop service for profile no-preload-20220601112334-9404: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:17.314155    9988 mustload.go:65] Loading cluster: no-preload-20220601112334-9404
	I0601 11:25:17.315158    9988 config.go:178] Loaded profile config "no-preload-20220601112334-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:25:17.315158    9988 stop.go:39] StopHost: no-preload-20220601112334-9404
	I0601 11:25:17.319151    9988 out.go:177] * Stopping node "no-preload-20220601112334-9404"  ...
	I0601 11:25:17.336154    9988 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:25:18.416839    9988 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:18.416887    9988 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0805215s)
	W0601 11:25:18.416960    9988 stop.go:75] unable to get state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	W0601 11:25:18.417002    9988 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:18.417071    9988 retry.go:31] will retry after 937.714187ms: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:19.367799    9988 stop.go:39] StopHost: no-preload-20220601112334-9404
	I0601 11:25:19.374731    9988 out.go:177] * Stopping node "no-preload-20220601112334-9404"  ...
	I0601 11:25:19.388636    9988 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:25:20.431782    9988 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:20.431782    9988 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0421364s)
	W0601 11:25:20.431782    9988 stop.go:75] unable to get state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	W0601 11:25:20.431782    9988 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:20.431782    9988 retry.go:31] will retry after 1.386956246s: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:21.820432    9988 stop.go:39] StopHost: no-preload-20220601112334-9404
	I0601 11:25:21.825333    9988 out.go:177] * Stopping node "no-preload-20220601112334-9404"  ...
	I0601 11:25:21.841264    9988 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:25:22.906312    9988 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:22.906312    9988 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0650356s)
	W0601 11:25:22.906312    9988 stop.go:75] unable to get state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	W0601 11:25:22.906312    9988 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:22.906312    9988 retry.go:31] will retry after 2.670351914s: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:25.585647    9988 stop.go:39] StopHost: no-preload-20220601112334-9404
	I0601 11:25:25.591107    9988 out.go:177] * Stopping node "no-preload-20220601112334-9404"  ...
	I0601 11:25:25.605997    9988 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:25:26.704922    9988 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:26.704922    9988 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.098912s)
	W0601 11:25:26.704922    9988 stop.go:75] unable to get state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	W0601 11:25:26.704922    9988 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:26.704922    9988 retry.go:31] will retry after 1.909024939s: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:28.623350    9988 stop.go:39] StopHost: no-preload-20220601112334-9404
	I0601 11:25:28.628466    9988 out.go:177] * Stopping node "no-preload-20220601112334-9404"  ...
	I0601 11:25:28.641791    9988 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:25:29.651541    9988 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:29.651541    9988 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0097384s)
	W0601 11:25:29.651541    9988 stop.go:75] unable to get state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	W0601 11:25:29.651541    9988 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:29.651541    9988 retry.go:31] will retry after 3.323628727s: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:32.975969    9988 stop.go:39] StopHost: no-preload-20220601112334-9404
	I0601 11:25:32.982037    9988 out.go:177] * Stopping node "no-preload-20220601112334-9404"  ...
	I0601 11:25:32.996787    9988 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:25:34.079466    9988 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:34.079466    9988 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0826667s)
	W0601 11:25:34.079466    9988 stop.go:75] unable to get state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	W0601 11:25:34.079466    9988 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:25:34.083339    9988 out.go:177] 
	W0601 11:25:34.085544    9988 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-20220601112334-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-20220601112334-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:25:34.085665    9988 out.go:239] * 
	* 
	W0601 11:25:34.343962    9988 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:25:34.349116    9988 out.go:177] 

** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p no-preload-20220601112334-9404 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601112334-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220601112334-9404: exit status 1 (1.0967118s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (2.8846463s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:25:38.352598    5192 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220601112334-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (26.47s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (7.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220601112350-9404 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220601112350-9404 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.8630949s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220601112350-9404 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context embed-certs-20220601112350-9404 describe deploy/metrics-server -n kube-system: exit status 1 (229.2878ms)

** stderr ** 
	error: context "embed-certs-20220601112350-9404" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-20220601112350-9404 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601112350-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220601112350-9404: exit status 1 (1.0914007s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220601112350-9404

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (2.8837773s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:25:27.208079   10220 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220601112350-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (7.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (26.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20220601112350-9404 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p embed-certs-20220601112350-9404 --alsologtostderr -v=3: exit status 82 (22.6295452s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-20220601112350-9404"  ...
	* Stopping node "embed-certs-20220601112350-9404"  ...
	* Stopping node "embed-certs-20220601112350-9404"  ...
	* Stopping node "embed-certs-20220601112350-9404"  ...
	* Stopping node "embed-certs-20220601112350-9404"  ...
	* Stopping node "embed-certs-20220601112350-9404"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 11:25:27.481703   10100 out.go:296] Setting OutFile to fd 1520 ...
	I0601 11:25:27.544125   10100 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:25:27.544125   10100 out.go:309] Setting ErrFile to fd 1608...
	I0601 11:25:27.544125   10100 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:25:27.554896   10100 out.go:303] Setting JSON to false
	I0601 11:25:27.554896   10100 daemonize_windows.go:44] trying to kill existing schedule stop for profile embed-certs-20220601112350-9404...
	I0601 11:25:27.572829   10100 ssh_runner.go:195] Run: systemctl --version
	I0601 11:25:27.578843   10100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:25:30.062035   10100 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:25:30.062198   10100 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (2.4829524s)
	I0601 11:25:30.072320   10100 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0601 11:25:30.078205   10100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:25:31.115606   10100 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:25:31.115862   10100 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0373894s)
	I0601 11:25:31.116007   10100 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:31.490714   10100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:25:32.539247   10100 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:25:32.541399   10100 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0482605s)
	I0601 11:25:32.541955   10100 openrc.go:165] stop output: 
	E0601 11:25:32.541955   10100 daemonize_windows.go:38] error terminating scheduled stop for profile embed-certs-20220601112350-9404: stopping schedule-stop service for profile embed-certs-20220601112350-9404: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:32.542104   10100 mustload.go:65] Loading cluster: embed-certs-20220601112350-9404
	I0601 11:25:32.543160   10100 config.go:178] Loaded profile config "embed-certs-20220601112350-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:25:32.543184   10100 stop.go:39] StopHost: embed-certs-20220601112350-9404
	I0601 11:25:32.546814   10100 out.go:177] * Stopping node "embed-certs-20220601112350-9404"  ...
	I0601 11:25:32.567206   10100 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:25:33.638629   10100 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:33.638629   10100 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0714103s)
	W0601 11:25:33.638629   10100 stop.go:75] unable to get state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	W0601 11:25:33.638629   10100 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:33.638629   10100 retry.go:31] will retry after 937.714187ms: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:34.582404   10100 stop.go:39] StopHost: embed-certs-20220601112350-9404
	I0601 11:25:34.593637   10100 out.go:177] * Stopping node "embed-certs-20220601112350-9404"  ...
	I0601 11:25:34.610067   10100 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:25:35.670329   10100 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:35.670365   10100 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0600553s)
	W0601 11:25:35.670428   10100 stop.go:75] unable to get state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	W0601 11:25:35.670471   10100 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:35.670471   10100 retry.go:31] will retry after 1.386956246s: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:37.062114   10100 stop.go:39] StopHost: embed-certs-20220601112350-9404
	I0601 11:25:37.074420   10100 out.go:177] * Stopping node "embed-certs-20220601112350-9404"  ...
	I0601 11:25:37.090963   10100 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:25:38.197027   10100 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:38.197174   10100 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.1057865s)
	W0601 11:25:38.197249   10100 stop.go:75] unable to get state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	W0601 11:25:38.197326   10100 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:38.197326   10100 retry.go:31] will retry after 2.670351914s: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:40.877823   10100 stop.go:39] StopHost: embed-certs-20220601112350-9404
	I0601 11:25:40.882073   10100 out.go:177] * Stopping node "embed-certs-20220601112350-9404"  ...
	I0601 11:25:40.916472   10100 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:25:42.020977   10100 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:42.021169   10100 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.1044921s)
	W0601 11:25:42.021246   10100 stop.go:75] unable to get state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	W0601 11:25:42.021310   10100 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:42.021367   10100 retry.go:31] will retry after 1.909024939s: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:43.935613   10100 stop.go:39] StopHost: embed-certs-20220601112350-9404
	I0601 11:25:43.959605   10100 out.go:177] * Stopping node "embed-certs-20220601112350-9404"  ...
	I0601 11:25:43.974540   10100 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:25:45.083442   10100 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:45.083527   10100 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.1087231s)
	W0601 11:25:45.083647   10100 stop.go:75] unable to get state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	W0601 11:25:45.083647   10100 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:45.083647   10100 retry.go:31] will retry after 3.323628727s: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:48.421141   10100 stop.go:39] StopHost: embed-certs-20220601112350-9404
	I0601 11:25:48.438208   10100 out.go:177] * Stopping node "embed-certs-20220601112350-9404"  ...
	I0601 11:25:48.455673   10100 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:25:49.565646   10100 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:25:49.565646   10100 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.1099607s)
	W0601 11:25:49.565646   10100 stop.go:75] unable to get state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	W0601 11:25:49.565646   10100 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:25:49.569867   10100 out.go:177] 
	W0601 11:25:49.572903   10100 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-20220601112350-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-20220601112350-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:25:49.572903   10100 out.go:239] * 
	* 
	W0601 11:25:49.820604   10100 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:25:49.824337   10100 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p embed-certs-20220601112350-9404 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601112350-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220601112350-9404: exit status 1 (1.1458709s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220601112350-9404

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (3.0859825s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:25:54.081598    4516 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220601112350-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (26.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (9.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (2.9001651s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:25:41.253234    5152 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404

                                                
                                                
** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220601112334-9404 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220601112334-9404 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.8974516s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601112334-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220601112334-9404: exit status 1 (1.1436262s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220601112334-9404

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (2.9138264s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:25:48.232611    9352 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220601112334-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (9.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (119.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220601112334-9404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20220601112334-9404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m54.8177004s)

                                                
                                                
-- stdout --
	* [no-preload-20220601112334-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-20220601112334-9404 in cluster no-preload-20220601112334-9404
	* Pulling base image ...
	* docker "no-preload-20220601112334-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-20220601112334-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 11:25:48.480641    9244 out.go:296] Setting OutFile to fd 1428 ...
	I0601 11:25:48.548531    9244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:25:48.548576    9244 out.go:309] Setting ErrFile to fd 1576...
	I0601 11:25:48.548606    9244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:25:48.561101    9244 out.go:303] Setting JSON to false
	I0601 11:25:48.563726    9244 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14684,"bootTime":1654068064,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:25:48.563726    9244 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:25:48.568119    9244 out.go:177] * [no-preload-20220601112334-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:25:48.573706    9244 notify.go:193] Checking for updates...
	I0601 11:25:48.575872    9244 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:25:48.578270    9244 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:25:48.580957    9244 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:25:48.583645    9244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:25:48.588697    9244 config.go:178] Loaded profile config "no-preload-20220601112334-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:25:48.588697    9244 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:25:51.309519    9244 docker.go:137] docker version: linux-20.10.14
	I0601 11:25:51.315520    9244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:25:53.486696    9244 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1701527s)
	I0601 11:25:53.486696    9244 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:25:52.3843214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:25:53.491492    9244 out.go:177] * Using the docker driver based on existing profile
	I0601 11:25:53.493804    9244 start.go:284] selected driver: docker
	I0601 11:25:53.493804    9244 start.go:806] validating driver "docker" against &{Name:no-preload-20220601112334-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601112334-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:25:53.493804    9244 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:25:53.566516    9244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:25:55.674568    9244 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1080276s)
	I0601 11:25:55.674568    9244 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-01 11:25:54.6142368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:25:55.674568    9244 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:25:55.674568    9244 cni.go:95] Creating CNI manager for ""
	I0601 11:25:55.674568    9244 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:25:55.674568    9244 start_flags.go:306] config:
	{Name:no-preload-20220601112334-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601112334-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:25:55.678545    9244 out.go:177] * Starting control plane node no-preload-20220601112334-9404 in cluster no-preload-20220601112334-9404
	I0601 11:25:55.681040    9244 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:25:55.683271    9244 out.go:177] * Pulling base image ...
	I0601 11:25:55.686107    9244 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:25:55.686684    9244 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:25:55.686684    9244 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20220601112334-9404\config.json ...
	I0601 11:25:55.686908    9244 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0601 11:25:55.686908    9244 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6
	I0601 11:25:55.686908    9244 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6
	I0601 11:25:55.686908    9244 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6
	I0601 11:25:55.686908    9244 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.5.1-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0
	I0601 11:25:55.686908    9244 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6
	I0601 11:25:55.686908    9244 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6
	I0601 11:25:55.686908    9244 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns:v1.8.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6
	I0601 11:25:55.885018    9244 cache.go:107] acquiring lock: {Name:mk9255ee8c390126b963cceac501a1fcc40ecb6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:25:55.885094    9244 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:25:55.885094    9244 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0601 11:25:55.885094    9244 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6 exists
	I0601 11:25:55.885646    9244 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 198.1839ms
	I0601 11:25:55.885646    9244 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0601 11:25:55.885923    9244 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.23.6" took 199.0122ms
	I0601 11:25:55.885976    9244 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.23.6 succeeded
	I0601 11:25:55.885976    9244 cache.go:107] acquiring lock: {Name:mk40b809628c4e9673e2a41bf9fb31b8a6b3529d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:25:55.885976    9244 cache.go:107] acquiring lock: {Name:mk1cf2f2eee53b81f1c95945c2dd3783d0c7d992 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:25:55.885976    9244 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6 exists
	I0601 11:25:55.885976    9244 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6 exists
	I0601 11:25:55.885976    9244 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.23.6" took 199.0657ms
	I0601 11:25:55.886521    9244 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.23.6 succeeded
	I0601 11:25:55.886521    9244 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.23.6" took 199.6104ms
	I0601 11:25:55.886644    9244 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.23.6 succeeded
	I0601 11:25:55.886589    9244 cache.go:107] acquiring lock: {Name:mkb7d2f7b32c5276784ba454e50c746d7fc6c05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:25:55.886912    9244 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6 exists
	I0601 11:25:55.886912    9244 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.6" took 200.002ms
	I0601 11:25:55.886912    9244 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.6 succeeded
	I0601 11:25:55.895618    9244 cache.go:107] acquiring lock: {Name:mka0a7f9fce0e132e7529c42bed359c919fc231b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:25:55.895618    9244 cache.go:107] acquiring lock: {Name:mk3772b9dcb36c3cbc3aa4dfbe66c5266092e2c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:25:55.895618    9244 cache.go:107] acquiring lock: {Name:mk90a34f529b9ea089d74e18a271c58e34606f29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:25:55.895618    9244 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6 exists
	I0601 11:25:55.895618    9244 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0 exists
	I0601 11:25:55.895618    9244 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6 exists
	I0601 11:25:55.895618    9244 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.5.1-0" took 208.7075ms
	I0601 11:25:55.895618    9244 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.5.1-0 succeeded
	I0601 11:25:55.895618    9244 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns\\coredns_v1.8.6" took 208.7075ms
	I0601 11:25:55.896152    9244 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns\coredns_v1.8.6 succeeded
	I0601 11:25:55.896152    9244 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.23.6" took 209.2411ms
	I0601 11:25:55.896152    9244 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.23.6 succeeded
	I0601 11:25:55.896281    9244 cache.go:87] Successfully saved all images to host disk.
	I0601 11:25:56.883051    9244 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:25:56.883304    9244 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:25:56.883304    9244 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:25:56.883304    9244 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:25:56.883304    9244 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:25:56.883304    9244 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:25:56.883966    9244 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:25:56.883966    9244 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:25:56.883966    9244 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:25:59.276454    9244 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:25:59.276454    9244 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:25:59.276667    9244 start.go:352] acquiring machines lock for no-preload-20220601112334-9404: {Name:mk28c43b16c7470d23bc1a71d3a7541a869ef61e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:25:59.276940    9244 start.go:356] acquired machines lock for "no-preload-20220601112334-9404" in 177.6µs
	I0601 11:25:59.277091    9244 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:25:59.277091    9244 fix.go:55] fixHost starting: 
	I0601 11:25:59.298288    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:00.411997    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:00.411997    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1136962s)
	I0601 11:26:00.411997    9244 fix.go:103] recreateIfNeeded on no-preload-20220601112334-9404: state= err=unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:00.411997    9244 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:26:00.415482    9244 out.go:177] * docker "no-preload-20220601112334-9404" container is missing, will recreate.
	I0601 11:26:00.424365    9244 delete.go:124] DEMOLISHING no-preload-20220601112334-9404 ...
	I0601 11:26:00.439963    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:01.543568    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:01.543568    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1035922s)
	W0601 11:26:01.543568    9244 stop.go:75] unable to get state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:01.543568    9244 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:01.556613    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:02.650711    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:02.650759    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0939658s)
	I0601 11:26:02.650848    9244 delete.go:82] Unable to get host status for no-preload-20220601112334-9404, assuming it has already been deleted: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:02.658667    9244 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220601112334-9404
	W0601 11:26:03.782968    9244 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:03.782968    9244 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220601112334-9404: (1.1242875s)
	I0601 11:26:03.782968    9244 kic.go:356] could not find the container no-preload-20220601112334-9404 to remove it. will try anyways
	I0601 11:26:03.788986    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:04.879529    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:04.879529    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0905308s)
	W0601 11:26:04.879529    9244 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:04.887151    9244 cli_runner.go:164] Run: docker exec --privileged -t no-preload-20220601112334-9404 /bin/bash -c "sudo init 0"
	W0601 11:26:05.969910    9244 cli_runner.go:211] docker exec --privileged -t no-preload-20220601112334-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:26:05.969974    9244 cli_runner.go:217] Completed: docker exec --privileged -t no-preload-20220601112334-9404 /bin/bash -c "sudo init 0": (1.082608s)
	I0601 11:26:05.969987    9244 oci.go:625] error shutdown no-preload-20220601112334-9404: docker exec --privileged -t no-preload-20220601112334-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:06.989228    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:08.111231    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:08.111231    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1219908s)
	I0601 11:26:08.111231    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:08.111231    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:26:08.111231    9244 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:08.677622    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:09.783712    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:09.783712    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1059352s)
	I0601 11:26:09.783712    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:09.783712    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:26:09.783712    9244 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:10.872518    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:11.993581    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:11.993581    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1209642s)
	I0601 11:26:11.993581    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:11.993581    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:26:11.993581    9244 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:13.325086    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:14.428327    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:14.428327    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1032285s)
	I0601 11:26:14.428327    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:14.428327    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:26:14.428327    9244 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:16.030508    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:17.116827    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:17.116877    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0859399s)
	I0601 11:26:17.116935    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:17.116988    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:26:17.117024    9244 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:19.475074    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:20.538614    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:20.538614    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0635277s)
	I0601 11:26:20.538614    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:20.538614    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:26:20.538614    9244 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:25.067287    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:26.116652    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:26.116652    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0493529s)
	I0601 11:26:26.116652    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:26.116652    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:26:26.116652    9244 oci.go:88] couldn't shut down no-preload-20220601112334-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	 
	I0601 11:26:26.123653    9244 cli_runner.go:164] Run: docker rm -f -v no-preload-20220601112334-9404
	I0601 11:26:27.183899    9244 cli_runner.go:217] Completed: docker rm -f -v no-preload-20220601112334-9404: (1.0602335s)
	I0601 11:26:27.190791    9244 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220601112334-9404
	W0601 11:26:28.238408    9244 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:28.238408    9244 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220601112334-9404: (1.0476052s)
	I0601 11:26:28.244363    9244 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:26:29.296583    9244 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:26:29.296583    9244 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0522071s)
	I0601 11:26:29.302629    9244 network_create.go:272] running [docker network inspect no-preload-20220601112334-9404] to gather additional debugging logs...
	I0601 11:26:29.302629    9244 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404
	W0601 11:26:30.395040    9244 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:30.395040    9244 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404: (1.0923986s)
	I0601 11:26:30.395040    9244 network_create.go:275] error running [docker network inspect no-preload-20220601112334-9404]: docker network inspect no-preload-20220601112334-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220601112334-9404
	I0601 11:26:30.395040    9244 network_create.go:277] output of [docker network inspect no-preload-20220601112334-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220601112334-9404
	
	** /stderr **
	W0601 11:26:30.396050    9244 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:26:30.396050    9244 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:26:31.415376    9244 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:26:31.421271    9244 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:26:31.421912    9244 start.go:165] libmachine.API.Create for "no-preload-20220601112334-9404" (driver="docker")
	I0601 11:26:31.421912    9244 client.go:168] LocalClient.Create starting
	I0601 11:26:31.422707    9244 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:26:31.423039    9244 main.go:134] libmachine: Decoding PEM data...
	I0601 11:26:31.423039    9244 main.go:134] libmachine: Parsing certificate...
	I0601 11:26:31.423039    9244 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:26:31.423039    9244 main.go:134] libmachine: Decoding PEM data...
	I0601 11:26:31.423039    9244 main.go:134] libmachine: Parsing certificate...
	I0601 11:26:31.432821    9244 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:26:32.552032    9244 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:26:32.552032    9244 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1191983s)
	I0601 11:26:32.560013    9244 network_create.go:272] running [docker network inspect no-preload-20220601112334-9404] to gather additional debugging logs...
	I0601 11:26:32.560013    9244 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404
	W0601 11:26:33.683821    9244 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:33.683926    9244 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404: (1.1237231s)
	I0601 11:26:33.683926    9244 network_create.go:275] error running [docker network inspect no-preload-20220601112334-9404]: docker network inspect no-preload-20220601112334-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220601112334-9404
	I0601 11:26:33.683926    9244 network_create.go:277] output of [docker network inspect no-preload-20220601112334-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220601112334-9404
	
	** /stderr **
	I0601 11:26:33.689865    9244 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:26:34.801360    9244 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1113129s)
	I0601 11:26:34.820884    9244 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0002ac0e0] misses:0}
	I0601 11:26:34.820976    9244 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:26:34.820976    9244 network_create.go:115] attempt to create docker network no-preload-20220601112334-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:26:34.830534    9244 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404
	W0601 11:26:35.904723    9244 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:35.904723    9244 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: (1.0741771s)
	E0601 11:26:35.904723    9244 network_create.go:104] error while trying to create docker network no-preload-20220601112334-9404 192.168.49.0/24: create docker network no-preload-20220601112334-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 536933db3234886c8eb3cd1d61a13d3154f4e1aa9b01c00da2a60ef6646a7c53 (br-536933db3234): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:26:35.904723    9244 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220601112334-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 536933db3234886c8eb3cd1d61a13d3154f4e1aa9b01c00da2a60ef6646a7c53 (br-536933db3234): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220601112334-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 536933db3234886c8eb3cd1d61a13d3154f4e1aa9b01c00da2a60ef6646a7c53 (br-536933db3234): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:26:35.918723    9244 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:26:36.992319    9244 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0734007s)
	I0601 11:26:36.999722    9244 cli_runner.go:164] Run: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:26:38.058164    9244 cli_runner.go:211] docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:26:38.058164    9244 cli_runner.go:217] Completed: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0583719s)
	I0601 11:26:38.058164    9244 client.go:171] LocalClient.Create took 6.6361761s
	I0601 11:26:40.078486    9244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:26:40.085588    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:26:41.178295    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:41.178295    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0926949s)
	I0601 11:26:41.178295    9244 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:41.360730    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:26:42.445535    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:42.445535    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.084792s)
	W0601 11:26:42.445535    9244 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:26:42.445535    9244 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:42.455500    9244 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:26:42.461158    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:26:43.555221    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:43.555285    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0940179s)
	I0601 11:26:43.555446    9244 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:43.772691    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:26:44.837783    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:44.837783    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0650804s)
	W0601 11:26:44.837783    9244 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:26:44.837783    9244 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:44.837783    9244 start.go:134] duration metric: createHost completed in 13.422254s
	I0601 11:26:44.846875    9244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:26:44.853782    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:26:45.965064    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:45.965064    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.1112694s)
	I0601 11:26:45.965064    9244 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:46.304499    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:26:47.386152    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:47.386152    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0814792s)
	W0601 11:26:47.386152    9244 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:26:47.386152    9244 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:47.396869    9244 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:26:47.403750    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:26:48.467667    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:48.467849    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0639049s)
	I0601 11:26:48.467981    9244 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:48.698223    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:26:49.790546    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:49.790670    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0921115s)
	W0601 11:26:49.790745    9244 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:26:49.790745    9244 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:49.790745    9244 fix.go:57] fixHost completed within 50.513078s
	I0601 11:26:49.790745    9244 start.go:81] releasing machines lock for "no-preload-20220601112334-9404", held for 50.5132296s
	W0601 11:26:49.790745    9244 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	W0601 11:26:49.791372    9244 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	
	I0601 11:26:49.791407    9244 start.go:614] Will try again in 5 seconds ...
	I0601 11:26:54.795954    9244 start.go:352] acquiring machines lock for no-preload-20220601112334-9404: {Name:mk28c43b16c7470d23bc1a71d3a7541a869ef61e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:26:54.795954    9244 start.go:356] acquired machines lock for "no-preload-20220601112334-9404" in 0s
	I0601 11:26:54.795954    9244 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:26:54.795954    9244 fix.go:55] fixHost starting: 
	I0601 11:26:54.812159    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:55.989286    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:55.989286    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1770665s)
	I0601 11:26:55.989286    9244 fix.go:103] recreateIfNeeded on no-preload-20220601112334-9404: state= err=unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:55.989286    9244 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:26:55.992294    9244 out.go:177] * docker "no-preload-20220601112334-9404" container is missing, will recreate.
	I0601 11:26:55.996297    9244 delete.go:124] DEMOLISHING no-preload-20220601112334-9404 ...
	I0601 11:26:56.011290    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:57.129820    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:57.129820    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1185171s)
	W0601 11:26:57.129820    9244 stop.go:75] unable to get state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:57.129820    9244 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:57.145824    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:26:58.273699    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:58.273699    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1278622s)
	I0601 11:26:58.273699    9244 delete.go:82] Unable to get host status for no-preload-20220601112334-9404, assuming it has already been deleted: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:26:58.280626    9244 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220601112334-9404
	W0601 11:26:59.393744    9244 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:26:59.393744    9244 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220601112334-9404: (1.1131053s)
	I0601 11:26:59.393744    9244 kic.go:356] could not find the container no-preload-20220601112334-9404 to remove it. will try anyways
	I0601 11:26:59.399734    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:27:00.493012    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:00.493012    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0932658s)
	W0601 11:27:00.493012    9244 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:00.500068    9244 cli_runner.go:164] Run: docker exec --privileged -t no-preload-20220601112334-9404 /bin/bash -c "sudo init 0"
	W0601 11:27:01.602240    9244 cli_runner.go:211] docker exec --privileged -t no-preload-20220601112334-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:27:01.602400    9244 cli_runner.go:217] Completed: docker exec --privileged -t no-preload-20220601112334-9404 /bin/bash -c "sudo init 0": (1.101958s)
	I0601 11:27:01.602437    9244 oci.go:625] error shutdown no-preload-20220601112334-9404: docker exec --privileged -t no-preload-20220601112334-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:02.616000    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:27:03.737323    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:03.737323    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1210716s)
	I0601 11:27:03.737323    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:03.737323    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:27:03.737323    9244 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:04.244565    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:27:05.364472    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:05.364584    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1197483s)
	I0601 11:27:05.364584    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:05.364584    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:27:05.364584    9244 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:05.967418    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:27:07.091406    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:07.091406    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1239762s)
	I0601 11:27:07.091406    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:07.091406    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:27:07.091406    9244 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:08.003073    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:27:09.067936    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:09.067936    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.0648503s)
	I0601 11:27:09.067936    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:09.067936    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:27:09.067936    9244 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:11.074657    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:27:12.178745    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:12.178984    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1040756s)
	I0601 11:27:12.179094    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:12.179094    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:27:12.179154    9244 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:14.015873    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:27:15.105794    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:15.105794    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.089908s)
	I0601 11:27:15.105794    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:15.105794    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:27:15.105794    9244 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:17.797517    9244 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:27:18.948541    9244 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:18.948541    9244 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (1.1510107s)
	I0601 11:27:18.948541    9244 oci.go:637] temporary error verifying shutdown: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:18.948541    9244 oci.go:639] temporary error: container no-preload-20220601112334-9404 status is  but expect it to be exited
	I0601 11:27:18.948541    9244 oci.go:88] couldn't shut down no-preload-20220601112334-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	 
	I0601 11:27:18.955544    9244 cli_runner.go:164] Run: docker rm -f -v no-preload-20220601112334-9404
	I0601 11:27:20.047331    9244 cli_runner.go:217] Completed: docker rm -f -v no-preload-20220601112334-9404: (1.0917745s)
	I0601 11:27:20.055330    9244 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220601112334-9404
	W0601 11:27:21.148738    9244 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:27:21.148738    9244 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} no-preload-20220601112334-9404: (1.093396s)
	I0601 11:27:21.150596    9244 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:27:22.249771    9244 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:27:22.249771    9244 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0991632s)
	I0601 11:27:22.257479    9244 network_create.go:272] running [docker network inspect no-preload-20220601112334-9404] to gather additional debugging logs...
	I0601 11:27:22.257545    9244 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404
	W0601 11:27:23.356905    9244 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:27:23.356980    9244 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404: (1.0991526s)
	I0601 11:27:23.356980    9244 network_create.go:275] error running [docker network inspect no-preload-20220601112334-9404]: docker network inspect no-preload-20220601112334-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220601112334-9404
	I0601 11:27:23.356980    9244 network_create.go:277] output of [docker network inspect no-preload-20220601112334-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220601112334-9404
	
	** /stderr **
	W0601 11:27:23.358054    9244 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:27:23.358054    9244 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:27:24.369031    9244 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:27:24.372588    9244 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:27:24.372986    9244 start.go:165] libmachine.API.Create for "no-preload-20220601112334-9404" (driver="docker")
	I0601 11:27:24.373043    9244 client.go:168] LocalClient.Create starting
	I0601 11:27:24.373627    9244 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:27:24.373849    9244 main.go:134] libmachine: Decoding PEM data...
	I0601 11:27:24.373908    9244 main.go:134] libmachine: Parsing certificate...
	I0601 11:27:24.374117    9244 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:27:24.374342    9244 main.go:134] libmachine: Decoding PEM data...
	I0601 11:27:24.374342    9244 main.go:134] libmachine: Parsing certificate...
	I0601 11:27:24.387156    9244 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:27:25.468076    9244 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:27:25.468143    9244 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.080666s)
	I0601 11:27:25.474731    9244 network_create.go:272] running [docker network inspect no-preload-20220601112334-9404] to gather additional debugging logs...
	I0601 11:27:25.474731    9244 cli_runner.go:164] Run: docker network inspect no-preload-20220601112334-9404
	W0601 11:27:26.546572    9244 cli_runner.go:211] docker network inspect no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:27:26.546572    9244 cli_runner.go:217] Completed: docker network inspect no-preload-20220601112334-9404: (1.071828s)
	I0601 11:27:26.546572    9244 network_create.go:275] error running [docker network inspect no-preload-20220601112334-9404]: docker network inspect no-preload-20220601112334-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220601112334-9404
	I0601 11:27:26.546572    9244 network_create.go:277] output of [docker network inspect no-preload-20220601112334-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220601112334-9404
	
	** /stderr **
	I0601 11:27:26.554297    9244 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:27:27.663235    9244 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1089257s)
	I0601 11:27:27.680254    9244 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0002ac0e0] amended:false}} dirty:map[] misses:0}
	I0601 11:27:27.681250    9244 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:27:27.696253    9244 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0002ac0e0] amended:true}} dirty:map[192.168.49.0:0xc0002ac0e0 192.168.58.0:0xc0002ac558] misses:0}
	I0601 11:27:27.696253    9244 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:27:27.696253    9244 network_create.go:115] attempt to create docker network no-preload-20220601112334-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:27:27.702245    9244 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404
	W0601 11:27:28.807268    9244 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:27:28.807308    9244 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: (1.1049082s)
	E0601 11:27:28.807579    9244 network_create.go:104] error while trying to create docker network no-preload-20220601112334-9404 192.168.58.0/24: create docker network no-preload-20220601112334-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a1226776a71d687e0643c264d583735f4d6f63c2f8109dd1f98a38fd2b2a0c28 (br-a1226776a71d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:27:28.807711    9244 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220601112334-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a1226776a71d687e0643c264d583735f4d6f63c2f8109dd1f98a38fd2b2a0c28 (br-a1226776a71d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220601112334-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220601112334-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a1226776a71d687e0643c264d583735f4d6f63c2f8109dd1f98a38fd2b2a0c28 (br-a1226776a71d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:27:28.822202    9244 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:27:29.942255    9244 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1200408s)
	I0601 11:27:29.948259    9244 cli_runner.go:164] Run: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:27:31.033170    9244 cli_runner.go:211] docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:27:31.033231    9244 cli_runner.go:217] Completed: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0847821s)
	I0601 11:27:31.033293    9244 client.go:171] LocalClient.Create took 6.6600501s
	I0601 11:27:33.054734    9244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:27:33.060888    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:27:34.151907    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:27:34.151907    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0910062s)
	I0601 11:27:34.151907    9244 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:34.426112    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:27:35.542178    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:27:35.542178    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.1160536s)
	W0601 11:27:35.542178    9244 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:27:35.542178    9244 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:35.551178    9244 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:27:35.558182    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:27:36.624567    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:27:36.624567    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.066373s)
	I0601 11:27:36.624567    9244 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:36.837499    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:27:37.903839    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:27:37.903839    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0663281s)
	W0601 11:27:37.903839    9244 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:27:37.903839    9244 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:37.903839    9244 start.go:134] duration metric: createHost completed in 13.5346532s
	I0601 11:27:37.916928    9244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:27:37.923248    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:27:38.995245    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:27:38.995370    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0719277s)
	I0601 11:27:38.995370    9244 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:39.320757    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:27:40.397767    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:27:40.397767    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0769975s)
	W0601 11:27:40.397767    9244 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:27:40.397767    9244 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:40.407709    9244 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:27:40.413506    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:27:41.509486    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:27:41.509614    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.0952235s)
	I0601 11:27:41.509679    9244 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:41.868795    9244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404
	W0601 11:27:43.021169    9244 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404 returned with exit code 1
	I0601 11:27:43.021169    9244 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: (1.1523609s)
	W0601 11:27:43.021169    9244 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:27:43.021169    9244 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220601112334-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601112334-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	I0601 11:27:43.021169    9244 fix.go:57] fixHost completed within 48.2246682s
	I0601 11:27:43.021169    9244 start.go:81] releasing machines lock for "no-preload-20220601112334-9404", held for 48.2246682s
	W0601 11:27:43.022174    9244 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-20220601112334-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p no-preload-20220601112334-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	
	I0601 11:27:43.028167    9244 out.go:177] 
	W0601 11:27:43.030176    9244 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220601112334-9404 container: docker volume create no-preload-20220601112334-9404 --label name.minikube.sigs.k8s.io=no-preload-20220601112334-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220601112334-9404: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220601112334-9404': mkdir /var/lib/docker/volumes/no-preload-20220601112334-9404: read-only file system
	
	W0601 11:27:43.031180    9244 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:27:43.031180    9244 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:27:43.034181    9244 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-20220601112334-9404 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601112334-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220601112334-9404: exit status 1 (1.1455975s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (2.9573352s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:27:47.354538    9020 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220601112334-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (119.12s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (3.1139606s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:25:57.195376    7604 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220601112350-9404 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220601112350-9404 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0135701s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601112350-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220601112350-9404: exit status 1 (1.1790931s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (3.0127514s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:26:04.410409    4252 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220601112350-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (10.33s)

TestStartStop/group/embed-certs/serial/SecondStart (118.53s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220601112350-9404 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p embed-certs-20220601112350-9404 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m54.0752837s)

-- stdout --
	* [embed-certs-20220601112350-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node embed-certs-20220601112350-9404 in cluster embed-certs-20220601112350-9404
	* Pulling base image ...
	* docker "embed-certs-20220601112350-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-20220601112350-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:26:04.665753    9528 out.go:296] Setting OutFile to fd 1624 ...
	I0601 11:26:04.723803    9528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:26:04.723803    9528 out.go:309] Setting ErrFile to fd 1640...
	I0601 11:26:04.723803    9528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:26:04.734718    9528 out.go:303] Setting JSON to false
	I0601 11:26:04.736592    9528 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14700,"bootTime":1654068064,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:26:04.737635    9528 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:26:04.742634    9528 out.go:177] * [embed-certs-20220601112350-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:26:04.744467    9528 notify.go:193] Checking for updates...
	I0601 11:26:04.747186    9528 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:26:04.749859    9528 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:26:04.753101    9528 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:26:04.755105    9528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:26:04.758298    9528 config.go:178] Loaded profile config "embed-certs-20220601112350-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:26:04.759316    9528 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:26:07.428014    9528 docker.go:137] docker version: linux-20.10.14
	I0601 11:26:07.436748    9528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:26:09.549265    9528 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.112446s)
	I0601 11:26:09.550317    9528 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:26:08.4923154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:26:09.554467    9528 out.go:177] * Using the docker driver based on existing profile
	I0601 11:26:09.556490    9528 start.go:284] selected driver: docker
	I0601 11:26:09.556490    9528 start.go:806] validating driver "docker" against &{Name:embed-certs-20220601112350-9404 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601112350-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:26:09.556490    9528 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:26:09.620611    9528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:26:11.729562    9528 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1088608s)
	I0601 11:26:11.729953    9528 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:26:10.6517475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:26:11.730002    9528 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:26:11.730002    9528 cni.go:95] Creating CNI manager for ""
	I0601 11:26:11.730002    9528 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:26:11.730002    9528 start_flags.go:306] config:
	{Name:embed-certs-20220601112350-9404 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601112350-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:26:11.733414    9528 out.go:177] * Starting control plane node embed-certs-20220601112350-9404 in cluster embed-certs-20220601112350-9404
	I0601 11:26:11.736114    9528 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:26:11.739588    9528 out.go:177] * Pulling base image ...
	I0601 11:26:11.741780    9528 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:26:11.741832    9528 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:26:11.742041    9528 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:26:11.742041    9528 cache.go:57] Caching tarball of preloaded images
	I0601 11:26:11.742826    9528 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:26:11.743024    9528 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:26:11.743239    9528 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20220601112350-9404\config.json ...
	I0601 11:26:12.875057    9528 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:26:12.875057    9528 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:26:12.875057    9528 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:26:12.875057    9528 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:26:12.875057    9528 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:26:12.875057    9528 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:26:12.875057    9528 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:26:12.875057    9528 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:26:12.875057    9528 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:26:15.185972    9528 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:26:15.185972    9528 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:26:15.185972    9528 start.go:352] acquiring machines lock for embed-certs-20220601112350-9404: {Name:mkab52c380d7df2e54eb0e0135a3345b8a4ef27b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:26:15.186629    9528 start.go:356] acquired machines lock for "embed-certs-20220601112350-9404" in 629.9µs
	I0601 11:26:15.186629    9528 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:26:15.186629    9528 fix.go:55] fixHost starting: 
	I0601 11:26:15.199173    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:26:16.255093    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:16.255124    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0556941s)
	I0601 11:26:16.255222    9528 fix.go:103] recreateIfNeeded on embed-certs-20220601112350-9404: state= err=unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:16.255286    9528 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:26:16.258356    9528 out.go:177] * docker "embed-certs-20220601112350-9404" container is missing, will recreate.
	I0601 11:26:16.261216    9528 delete.go:124] DEMOLISHING embed-certs-20220601112350-9404 ...
	I0601 11:26:16.275032    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:26:17.336407    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:17.336407    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0613635s)
	W0601 11:26:17.336407    9528 stop.go:75] unable to get state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:17.336407    9528 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:17.352958    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:26:18.421777    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:18.421777    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0688064s)
	I0601 11:26:18.421777    9528 delete.go:82] Unable to get host status for embed-certs-20220601112350-9404, assuming it has already been deleted: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:18.428780    9528 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220601112350-9404
	W0601 11:26:19.437017    9528 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:26:19.437050    9528 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220601112350-9404: (1.0080874s)
	I0601 11:26:19.437050    9528 kic.go:356] could not find the container embed-certs-20220601112350-9404 to remove it. will try anyways
	I0601 11:26:19.443540    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:26:20.523606    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:20.523606    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.079814s)
	W0601 11:26:20.523606    9528 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:20.531704    9528 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-20220601112350-9404 /bin/bash -c "sudo init 0"
	W0601 11:26:21.550421    9528 cli_runner.go:211] docker exec --privileged -t embed-certs-20220601112350-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:26:21.550421    9528 cli_runner.go:217] Completed: docker exec --privileged -t embed-certs-20220601112350-9404 /bin/bash -c "sudo init 0": (1.0180242s)
	I0601 11:26:21.550689    9528 oci.go:625] error shutdown embed-certs-20220601112350-9404: docker exec --privileged -t embed-certs-20220601112350-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:22.565646    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:26:23.610964    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:23.610964    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0450665s)
	I0601 11:26:23.611074    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:23.611192    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:26:23.611257    9528 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:24.171810    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:26:25.212997    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:25.212997    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0411755s)
	I0601 11:26:25.212997    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:25.212997    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:26:25.212997    9528 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:26.313699    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:26:27.389344    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:27.389344    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0756323s)
	I0601 11:26:27.389344    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:27.389344    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:26:27.389344    9528 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:28.708278    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:26:29.781906    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:29.781906    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0735592s)
	I0601 11:26:29.781906    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:29.781906    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:26:29.781906    9528 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:31.380607    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:26:32.472010    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:32.472010    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0913906s)
	I0601 11:26:32.472010    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:32.472010    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:26:32.472010    9528 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:34.825009    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:26:35.920726    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:35.920726    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0956705s)
	I0601 11:26:35.920726    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:35.920726    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:26:35.920726    9528 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:40.433777    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:26:41.539782    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:26:41.539841    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.1057779s)
	I0601 11:26:41.539841    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:41.539841    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:26:41.539841    9528 oci.go:88] couldn't shut down embed-certs-20220601112350-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	 
	I0601 11:26:41.546467    9528 cli_runner.go:164] Run: docker rm -f -v embed-certs-20220601112350-9404
	I0601 11:26:42.648428    9528 cli_runner.go:217] Completed: docker rm -f -v embed-certs-20220601112350-9404: (1.1019485s)
	I0601 11:26:42.654423    9528 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220601112350-9404
	W0601 11:26:43.723651    9528 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:26:43.723651    9528 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220601112350-9404: (1.0692166s)
	I0601 11:26:43.729651    9528 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:26:44.805772    9528 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:26:44.805772    9528 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0761087s)
	I0601 11:26:44.812901    9528 network_create.go:272] running [docker network inspect embed-certs-20220601112350-9404] to gather additional debugging logs...
	I0601 11:26:44.812901    9528 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404
	W0601 11:26:45.918144    9528 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:26:45.918144    9528 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404: (1.1052297s)
	I0601 11:26:45.918144    9528 network_create.go:275] error running [docker network inspect embed-certs-20220601112350-9404]: docker network inspect embed-certs-20220601112350-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220601112350-9404
	I0601 11:26:45.918144    9528 network_create.go:277] output of [docker network inspect embed-certs-20220601112350-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220601112350-9404
	
	** /stderr **
	W0601 11:26:45.919155    9528 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:26:45.919155    9528 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:26:46.926437    9528 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:26:46.933926    9528 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:26:46.934443    9528 start.go:165] libmachine.API.Create for "embed-certs-20220601112350-9404" (driver="docker")
	I0601 11:26:46.934648    9528 client.go:168] LocalClient.Create starting
	I0601 11:26:46.935175    9528 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:26:46.935257    9528 main.go:134] libmachine: Decoding PEM data...
	I0601 11:26:46.935257    9528 main.go:134] libmachine: Parsing certificate...
	I0601 11:26:46.935257    9528 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:26:46.935774    9528 main.go:134] libmachine: Decoding PEM data...
	I0601 11:26:46.935836    9528 main.go:134] libmachine: Parsing certificate...
	I0601 11:26:46.943362    9528 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:26:48.026564    9528 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:26:48.026564    9528 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0831897s)
	I0601 11:26:48.035329    9528 network_create.go:272] running [docker network inspect embed-certs-20220601112350-9404] to gather additional debugging logs...
	I0601 11:26:48.035329    9528 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404
	W0601 11:26:49.113083    9528 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:26:49.113083    9528 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404: (1.0777422s)
	I0601 11:26:49.113083    9528 network_create.go:275] error running [docker network inspect embed-certs-20220601112350-9404]: docker network inspect embed-certs-20220601112350-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220601112350-9404
	I0601 11:26:49.113083    9528 network_create.go:277] output of [docker network inspect embed-certs-20220601112350-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220601112350-9404
	
	** /stderr **
	I0601 11:26:49.120600    9528 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:26:50.196837    9528 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0762241s)
	I0601 11:26:50.216451    9528 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001244e8] misses:0}
	I0601 11:26:50.216451    9528 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:26:50.216451    9528 network_create.go:115] attempt to create docker network embed-certs-20220601112350-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:26:50.225240    9528 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404
	W0601 11:26:51.286043    9528 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:26:51.286260    9528 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: (1.0606172s)
	E0601 11:26:51.286337    9528 network_create.go:104] error while trying to create docker network embed-certs-20220601112350-9404 192.168.49.0/24: create docker network embed-certs-20220601112350-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ee21cb6633ad94ef131afc36c4820aa5e65e76bed05b705d9e9c87eee3286fac (br-ee21cb6633ad): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:26:51.286398    9528 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220601112350-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ee21cb6633ad94ef131afc36c4820aa5e65e76bed05b705d9e9c87eee3286fac (br-ee21cb6633ad): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220601112350-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ee21cb6633ad94ef131afc36c4820aa5e65e76bed05b705d9e9c87eee3286fac (br-ee21cb6633ad): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:26:51.299596    9528 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:26:52.407294    9528 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1076853s)
	I0601 11:26:52.415629    9528 cli_runner.go:164] Run: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:26:53.482092    9528 cli_runner.go:211] docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:26:53.482092    9528 cli_runner.go:217] Completed: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: (1.066451s)
	I0601 11:26:53.482092    9528 client.go:171] LocalClient.Create took 6.5473691s
	I0601 11:26:55.502841    9528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:26:55.509855    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:26:56.629556    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:26:56.629625    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.1196883s)
	I0601 11:26:56.629625    9528 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:56.812172    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:26:57.945237    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:26:57.945237    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.1330518s)
	W0601 11:26:57.945237    9528 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:26:57.945237    9528 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:57.956235    9528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:26:57.965095    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:26:59.063306    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:26:59.064405    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0981991s)
	I0601 11:26:59.065633    9528 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:26:59.275979    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:00.383322    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:00.383322    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.1073312s)
	W0601 11:27:00.383322    9528 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:27:00.383322    9528 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:00.383322    9528 start.go:134] duration metric: createHost completed in 13.4567319s
	I0601 11:27:00.394372    9528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:27:00.403223    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:01.511166    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:01.511166    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.1079302s)
	I0601 11:27:01.511166    9528 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:01.858158    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:02.954353    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:02.954353    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0961828s)
	W0601 11:27:02.954353    9528 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:27:02.954353    9528 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:02.964297    9528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:27:02.970305    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:04.078782    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:04.078782    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.1084642s)
	I0601 11:27:04.078782    9528 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:04.320136    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:05.411600    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:05.411600    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0912694s)
	W0601 11:27:05.411600    9528 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:27:05.411600    9528 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:05.411600    9528 fix.go:57] fixHost completed within 50.2244001s
	I0601 11:27:05.411600    9528 start.go:81] releasing machines lock for "embed-certs-20220601112350-9404", held for 50.2244001s
	W0601 11:27:05.411600    9528 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	W0601 11:27:05.412360    9528 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	
	I0601 11:27:05.412416    9528 start.go:614] Will try again in 5 seconds ...
	I0601 11:27:10.425299    9528 start.go:352] acquiring machines lock for embed-certs-20220601112350-9404: {Name:mkab52c380d7df2e54eb0e0135a3345b8a4ef27b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:27:10.425609    9528 start.go:356] acquired machines lock for "embed-certs-20220601112350-9404" in 221.4µs
	I0601 11:27:10.425804    9528 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:27:10.425873    9528 fix.go:55] fixHost starting: 
	I0601 11:27:10.441550    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:27:11.516253    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:11.516253    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0746904s)
	I0601 11:27:11.516253    9528 fix.go:103] recreateIfNeeded on embed-certs-20220601112350-9404: state= err=unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:11.516253    9528 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:27:11.521767    9528 out.go:177] * docker "embed-certs-20220601112350-9404" container is missing, will recreate.
	I0601 11:27:11.523986    9528 delete.go:124] DEMOLISHING embed-certs-20220601112350-9404 ...
	I0601 11:27:11.539490    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:27:12.618922    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:12.618972    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0792074s)
	W0601 11:27:12.619034    9528 stop.go:75] unable to get state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:12.619088    9528 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:12.633311    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:27:13.721326    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:13.721583    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0878274s)
	I0601 11:27:13.721723    9528 delete.go:82] Unable to get host status for embed-certs-20220601112350-9404, assuming it has already been deleted: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:13.728150    9528 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220601112350-9404
	W0601 11:27:14.823945    9528 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:14.823945    9528 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220601112350-9404: (1.0957832s)
	I0601 11:27:14.823945    9528 kic.go:356] could not find the container embed-certs-20220601112350-9404 to remove it. will try anyways
	I0601 11:27:14.833324    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:27:15.909827    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:15.909827    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0764907s)
	W0601 11:27:15.909827    9528 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:15.918085    9528 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-20220601112350-9404 /bin/bash -c "sudo init 0"
	W0601 11:27:17.012648    9528 cli_runner.go:211] docker exec --privileged -t embed-certs-20220601112350-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:27:17.012694    9528 cli_runner.go:217] Completed: docker exec --privileged -t embed-certs-20220601112350-9404 /bin/bash -c "sudo init 0": (1.0942746s)
	I0601 11:27:17.012694    9528 oci.go:625] error shutdown embed-certs-20220601112350-9404: docker exec --privileged -t embed-certs-20220601112350-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:18.030357    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:27:19.148732    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:19.148732    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.118362s)
	I0601 11:27:19.148732    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:19.148732    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:27:19.148732    9528 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:19.648298    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:27:20.771618    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:20.771810    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.1233069s)
	I0601 11:27:20.771967    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:20.771967    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:27:20.772041    9528 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:21.377848    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:27:22.482640    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:22.491252    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.1047788s)
	I0601 11:27:22.491252    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:22.491252    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:27:22.491252    9528 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:23.396063    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:27:24.476688    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:24.476922    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0805762s)
	I0601 11:27:24.477028    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:24.477101    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:27:24.477140    9528 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:26.478767    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:27:27.617643    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:27.617697    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.1387149s)
	I0601 11:27:27.617801    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:27.617866    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:27:27.617866    9528 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:29.452317    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:27:30.556722    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:30.556722    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.1043925s)
	I0601 11:27:30.556722    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:30.556722    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:27:30.556722    9528 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:33.241818    9528 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:27:34.338555    9528 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:34.338555    9528 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (1.0966607s)
	I0601 11:27:34.338555    9528 oci.go:637] temporary error verifying shutdown: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:34.338555    9528 oci.go:639] temporary error: container embed-certs-20220601112350-9404 status is  but expect it to be exited
	I0601 11:27:34.338555    9528 oci.go:88] couldn't shut down embed-certs-20220601112350-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	 
	I0601 11:27:34.345947    9528 cli_runner.go:164] Run: docker rm -f -v embed-certs-20220601112350-9404
	I0601 11:27:35.479277    9528 cli_runner.go:217] Completed: docker rm -f -v embed-certs-20220601112350-9404: (1.1333179s)
	I0601 11:27:35.486862    9528 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220601112350-9404
	W0601 11:27:36.544560    9528 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:36.544560    9528 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} embed-certs-20220601112350-9404: (1.0576861s)
	I0601 11:27:36.550563    9528 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:27:37.636949    9528 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:27:37.637015    9528 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0862991s)
	I0601 11:27:37.645456    9528 network_create.go:272] running [docker network inspect embed-certs-20220601112350-9404] to gather additional debugging logs...
	I0601 11:27:37.645456    9528 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404
	W0601 11:27:38.715525    9528 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:38.715525    9528 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404: (1.070057s)
	I0601 11:27:38.715525    9528 network_create.go:275] error running [docker network inspect embed-certs-20220601112350-9404]: docker network inspect embed-certs-20220601112350-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220601112350-9404
	I0601 11:27:38.715525    9528 network_create.go:277] output of [docker network inspect embed-certs-20220601112350-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220601112350-9404
	
	** /stderr **
	W0601 11:27:38.716359    9528 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:27:38.716359    9528 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:27:39.726460    9528 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:27:39.731687    9528 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:27:39.731780    9528 start.go:165] libmachine.API.Create for "embed-certs-20220601112350-9404" (driver="docker")
	I0601 11:27:39.731780    9528 client.go:168] LocalClient.Create starting
	I0601 11:27:39.732492    9528 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:27:39.732736    9528 main.go:134] libmachine: Decoding PEM data...
	I0601 11:27:39.732820    9528 main.go:134] libmachine: Parsing certificate...
	I0601 11:27:39.732965    9528 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:27:39.733207    9528 main.go:134] libmachine: Decoding PEM data...
	I0601 11:27:39.733305    9528 main.go:134] libmachine: Parsing certificate...
	I0601 11:27:39.746144    9528 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:27:40.853214    9528 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:27:40.853214    9528 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1070568s)
	I0601 11:27:40.861437    9528 network_create.go:272] running [docker network inspect embed-certs-20220601112350-9404] to gather additional debugging logs...
	I0601 11:27:40.861437    9528 cli_runner.go:164] Run: docker network inspect embed-certs-20220601112350-9404
	W0601 11:27:41.967353    9528 cli_runner.go:211] docker network inspect embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:41.967353    9528 cli_runner.go:217] Completed: docker network inspect embed-certs-20220601112350-9404: (1.1059035s)
	I0601 11:27:41.967353    9528 network_create.go:275] error running [docker network inspect embed-certs-20220601112350-9404]: docker network inspect embed-certs-20220601112350-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220601112350-9404
	I0601 11:27:41.967353    9528 network_create.go:277] output of [docker network inspect embed-certs-20220601112350-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220601112350-9404
	
	** /stderr **
	I0601 11:27:41.974346    9528 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:27:43.067162    9528 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0928034s)
	I0601 11:27:43.086865    9528 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001244e8] amended:false}} dirty:map[] misses:0}
	I0601 11:27:43.087433    9528 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:27:43.107703    9528 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001244e8] amended:true}} dirty:map[192.168.49.0:0xc0001244e8 192.168.58.0:0xc0007901f0] misses:0}
	I0601 11:27:43.107703    9528 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
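The subnet picker in the log above skips 192.168.49.0/24 (still under an unexpired reservation) and settles on 192.168.58.0/24, which it then reserves for 1m0s. A minimal sketch of that walk, assuming Python and a step of 9 in the third octet as the 49 → 58 jump suggests (an illustration only, not minikube's actual network.go code):

```python
import ipaddress

# Subnets with unexpired reservations, per the log above.
reserved = {"192.168.49.0/24"}

def next_free_subnet(start_octet=49, step=9, attempts=20):
    """Walk candidate private /24 subnets, skipping reserved ones."""
    for i in range(attempts):
        octet = start_octet + i * step
        if octet > 255:
            break
        cidr = f"192.168.{octet}.0/24"
        if cidr not in reserved:
            return ipaddress.ip_network(cidr)
    return None

print(next_free_subnet())  # 192.168.58.0/24
```

Because each chosen subnet is held for a minute, a parallel test that already owns 192.168.49.0/24 pushes this run onto the next candidate.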
	I0601 11:27:43.107703    9528 network_create.go:115] attempt to create docker network embed-certs-20220601112350-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:27:43.120052    9528 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404
	W0601 11:27:44.235749    9528 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:44.235749    9528 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: (1.1156843s)
	E0601 11:27:44.235749    9528 network_create.go:104] error while trying to create docker network embed-certs-20220601112350-9404 192.168.58.0/24: create docker network embed-certs-20220601112350-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8f7c0bef8a3ff9707e8711be8afb3a94b97c2f2b5e6625e2c9c7d55ea08d3eaf (br-8f7c0bef8a3f): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:27:44.235749    9528 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220601112350-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8f7c0bef8a3ff9707e8711be8afb3a94b97c2f2b5e6625e2c9c7d55ea08d3eaf (br-8f7c0bef8a3f): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220601112350-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8f7c0bef8a3ff9707e8711be8afb3a94b97c2f2b5e6625e2c9c7d55ea08d3eaf (br-8f7c0bef8a3f): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
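The "networks have overlapping IPv4" rejection above means Docker already has a bridge (br-50298ec25928) whose subnet overlaps the requested 192.168.58.0/24, even though minikube's own reservation map considered it free. A minimal sketch of that overlap check, using Python's ipaddress module (illustrative only; the real check happens inside the Docker daemon, and the existing network's subnet is an assumption here):

```python
import ipaddress

# The subnet minikube asked docker network create to use.
requested = ipaddress.ip_network("192.168.58.0/24")

# Hypothetical subnets of networks the daemon already manages;
# br-50298ec25928's actual subnet is not shown in the log.
existing = [
    ipaddress.ip_network("192.168.49.0/24"),
    ipaddress.ip_network("192.168.58.0/24"),
]

# The daemon rejects the create when any existing subnet overlaps the request.
conflicts = [net for net in existing if net.overlaps(requested)]
print(conflicts)  # the colliding subnet(s)
```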
	I0601 11:27:44.248738    9528 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:27:45.356945    9528 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1081947s)
	I0601 11:27:45.362954    9528 cli_runner.go:164] Run: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:27:46.440796    9528 cli_runner.go:211] docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:27:46.440796    9528 cli_runner.go:217] Completed: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0778017s)
	I0601 11:27:46.440796    9528 client.go:171] LocalClient.Create took 6.7089398s
	I0601 11:27:48.468508    9528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:27:48.475512    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:49.589788    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:49.589788    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.1142632s)
	I0601 11:27:49.589788    9528 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:49.871311    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:50.935402    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:50.935402    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0640783s)
	W0601 11:27:50.935402    9528 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:27:50.935402    9528 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:50.945715    9528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:27:50.952150    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:52.022441    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:52.022441    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0702779s)
	I0601 11:27:52.022441    9528 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:52.234986    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:53.321540    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:53.321540    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0865423s)
	W0601 11:27:53.321540    9528 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:27:53.321540    9528 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:53.321540    9528 start.go:134] duration metric: createHost completed in 13.5949254s
	I0601 11:27:53.330535    9528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:27:53.337536    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:54.432749    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:54.432749    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0952009s)
	I0601 11:27:54.432749    9528 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:54.767135    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:55.863887    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:55.863887    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0966416s)
	W0601 11:27:55.863887    9528 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:27:55.863887    9528 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:55.876161    9528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:27:55.882132    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:57.022106    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:57.022106    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.1399613s)
	I0601 11:27:57.022106    9528 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:57.378090    9528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404
	W0601 11:27:58.471990    9528 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404 returned with exit code 1
	I0601 11:27:58.471990    9528 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: (1.0938874s)
	W0601 11:27:58.472254    9528 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:27:58.472254    9528 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220601112350-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601112350-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	I0601 11:27:58.472254    9528 fix.go:57] fixHost completed within 48.0458344s
	I0601 11:27:58.472254    9528 start.go:81] releasing machines lock for "embed-certs-20220601112350-9404", held for 48.0460987s
	W0601 11:27:58.473010    9528 out.go:239] * Failed to start docker container. Running "minikube delete -p embed-certs-20220601112350-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p embed-certs-20220601112350-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	
	I0601 11:27:58.477190    9528 out.go:177] 
	W0601 11:27:58.479494    9528 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220601112350-9404 container: docker volume create embed-certs-20220601112350-9404 --label name.minikube.sigs.k8s.io=embed-certs-20220601112350-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220601112350-9404: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220601112350-9404': mkdir /var/lib/docker/volumes/embed-certs-20220601112350-9404: read-only file system
	
	W0601 11:27:58.479494    9528 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:27:58.479494    9528 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:27:58.482683    9528 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p embed-certs-20220601112350-9404 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601112350-9404

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220601112350-9404: exit status 1 (1.2168855s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (3.036783s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:28:02.938271    4912 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220601112350-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (118.53s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (4.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20220601112246-9404" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601112246-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220601112246-9404: exit status 1 (1.1273974s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (2.9918579s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:27:02.527668    4512 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601112246-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (4.13s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (4.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20220601112246-9404" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220601112246-9404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601112246-9404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (262.5832ms)

** stderr ** 
	error: context "old-k8s-version-20220601112246-9404" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220601112246-9404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601112246-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220601112246-9404: exit status 1 (1.1654938s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (3.0115241s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:27:06.982891    7376 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601112246-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (4.45s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (7.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220601112246-9404 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220601112246-9404 "sudo crictl images -o json": exit status 80 (3.2069219s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_6.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220601112246-9404 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json: unexpected end of JSON input. output:
start_stop_delete_test.go:306: v1.16.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601112246-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220601112246-9404: exit status 1 (1.1018971s)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220601112246-9404
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (2.9570197s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0601 11:27:14.256137    3800 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601112246-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (7.27s)
TestStartStop/group/old-k8s-version/serial/Pause (11.61s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20220601112246-9404 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220601112246-9404 --alsologtostderr -v=1: exit status 80 (3.2385326s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0601 11:27:14.520113    7292 out.go:296] Setting OutFile to fd 1528 ...
	I0601 11:27:14.581830    7292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:27:14.581830    7292 out.go:309] Setting ErrFile to fd 1748...
	I0601 11:27:14.581830    7292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:27:14.592814    7292 out.go:303] Setting JSON to false
	I0601 11:27:14.592814    7292 mustload.go:65] Loading cluster: old-k8s-version-20220601112246-9404
	I0601 11:27:14.593371    7292 config.go:178] Loaded profile config "old-k8s-version-20220601112246-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:27:14.609005    7292 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}
	W0601 11:27:17.228927    7292 cli_runner.go:211] docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:27:17.229002    7292 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: (2.6196955s)
	I0601 11:27:17.233375    7292 out.go:177] 
	W0601 11:27:17.235607    7292 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
	
	W0601 11:27:17.235607    7292 out.go:239] * 
	* 
	W0601 11:27:17.498488    7292 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_11.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_11.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:27:17.501370    7292 out.go:177] 
** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220601112246-9404 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601112246-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220601112246-9404: exit status 1 (1.163602s)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220601112246-9404
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (3.0276037s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0601 11:27:21.695446    8720 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601112246-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601112246-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220601112246-9404: exit status 1 (1.2179439s)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220601112246-9404
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220601112246-9404 -n old-k8s-version-20220601112246-9404: exit status 7 (2.9484629s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0601 11:27:25.870517    6240 status.go:247] status error: host: state: unknown state "old-k8s-version-20220601112246-9404": docker container inspect old-k8s-version-20220601112246-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220601112246-9404
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601112246-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (11.61s)
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (4.08s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20220601112334-9404" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601112334-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220601112334-9404: exit status 1 (1.1250176s)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220601112334-9404
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (2.9512721s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0601 11:27:51.424230    2968 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220601112334-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (4.08s)
TestStartStop/group/default-k8s-different-port/serial/FirstStart (82.3s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220601112749-9404 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220601112749-9404 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m18.0511611s)
-- stdout --
	* [default-k8s-different-port-20220601112749-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node default-k8s-different-port-20220601112749-9404 in cluster default-k8s-different-port-20220601112749-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-different-port-20220601112749-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	
-- /stdout --
** stderr ** 
	I0601 11:27:49.838813    4228 out.go:296] Setting OutFile to fd 1912 ...
	I0601 11:27:49.910924    4228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:27:49.910924    4228 out.go:309] Setting ErrFile to fd 1600...
	I0601 11:27:49.910924    4228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:27:49.923811    4228 out.go:303] Setting JSON to false
	I0601 11:27:49.926797    4228 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14805,"bootTime":1654068064,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:27:49.926797    4228 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:27:49.929804    4228 out.go:177] * [default-k8s-different-port-20220601112749-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:27:49.932803    4228 notify.go:193] Checking for updates...
	I0601 11:27:49.935755    4228 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:27:49.937760    4228 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:27:49.939749    4228 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:27:49.942756    4228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:27:49.945812    4228 config.go:178] Loaded profile config "cert-expiration-20220601112128-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:27:49.946769    4228 config.go:178] Loaded profile config "embed-certs-20220601112350-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:27:49.946769    4228 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:27:49.946769    4228 config.go:178] Loaded profile config "no-preload-20220601112334-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:27:49.946769    4228 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:27:52.686160    4228 docker.go:137] docker version: linux-20.10.14
	I0601 11:27:52.694170    4228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:27:54.870938    4228 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1767425s)
	I0601 11:27:54.870938    4228 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:27:53.783473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:27:54.874942    4228 out.go:177] * Using the docker driver based on user configuration
	I0601 11:27:54.879955    4228 start.go:284] selected driver: docker
	I0601 11:27:54.879955    4228 start.go:806] validating driver "docker" against <nil>
	I0601 11:27:54.879955    4228 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:27:55.008724    4228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:27:57.131080    4228 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1223321s)
	I0601 11:27:57.131080    4228 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:27:56.0662183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:27:57.131080    4228 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:27:57.132104    4228 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:27:57.136085    4228 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:27:57.138081    4228 cni.go:95] Creating CNI manager for ""
	I0601 11:27:57.138081    4228 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:27:57.138081    4228 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601112749-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601112749-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:27:57.142092    4228 out.go:177] * Starting control plane node default-k8s-different-port-20220601112749-9404 in cluster default-k8s-different-port-20220601112749-9404
	I0601 11:27:57.144081    4228 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:27:57.147084    4228 out.go:177] * Pulling base image ...
	I0601 11:27:57.149078    4228 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:27:57.149078    4228 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:27:57.149078    4228 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:27:57.149078    4228 cache.go:57] Caching tarball of preloaded images
	I0601 11:27:57.150082    4228 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:27:57.150082    4228 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:27:57.150082    4228 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20220601112749-9404\config.json ...
	I0601 11:27:57.150082    4228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20220601112749-9404\config.json: {Name:mk5d38681bb2c8ccc8433974ce4605bde6624372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:27:58.221664    4228 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:27:58.221664    4228 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:27:58.221664    4228 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:27:58.221664    4228 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:27:58.221664    4228 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:27:58.221664    4228 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:27:58.221664    4228 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:27:58.221664    4228 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:27:58.221664    4228 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:28:00.639642    4228 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:28:00.639759    4228 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:28:00.639874    4228 start.go:352] acquiring machines lock for default-k8s-different-port-20220601112749-9404: {Name:mk2d253a747261ca3a979b7941df8cd2b45f4516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:28:00.640186    4228 start.go:356] acquired machines lock for "default-k8s-different-port-20220601112749-9404" in 228.4µs
	I0601 11:28:00.640418    4228 start.go:91] Provisioning new machine with config: &{Name:default-k8s-different-port-20220601112749-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-por
t-20220601112749-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:28:00.640599    4228 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:28:00.643959    4228 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:28:00.644173    4228 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220601112749-9404" (driver="docker")
	I0601 11:28:00.644173    4228 client.go:168] LocalClient.Create starting
	I0601 11:28:00.644771    4228 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:28:00.644771    4228 main.go:134] libmachine: Decoding PEM data...
	I0601 11:28:00.645299    4228 main.go:134] libmachine: Parsing certificate...
	I0601 11:28:00.645628    4228 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:28:00.645628    4228 main.go:134] libmachine: Decoding PEM data...
	I0601 11:28:00.645628    4228 main.go:134] libmachine: Parsing certificate...
	I0601 11:28:00.656227    4228 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:28:01.770942    4228 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:28:01.770942    4228 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1146168s)
	I0601 11:28:01.778917    4228 network_create.go:272] running [docker network inspect default-k8s-different-port-20220601112749-9404] to gather additional debugging logs...
	I0601 11:28:01.778917    4228 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404
	W0601 11:28:02.859264    4228 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:28:02.859264    4228 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404: (1.0803352s)
	I0601 11:28:02.859264    4228 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220601112749-9404]: docker network inspect default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220601112749-9404
	I0601 11:28:02.859264    4228 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220601112749-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220601112749-9404
	
	** /stderr **
	I0601 11:28:02.865238    4228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:28:03.960422    4228 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0951709s)
	I0601 11:28:03.980435    4228 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000dba068] misses:0}
	I0601 11:28:03.980435    4228 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:28:03.980435    4228 network_create.go:115] attempt to create docker network default-k8s-different-port-20220601112749-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:28:03.988973    4228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404
	W0601 11:28:05.078837    4228 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:28:05.078837    4228 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: (1.089667s)
	E0601 11:28:05.078837    4228 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220601112749-9404 192.168.49.0/24: create docker network default-k8s-different-port-20220601112749-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 979453d9341b1c0a4cd16ff505d0e073f930db946f7a93bdd771833ba22bc4ff (br-979453d9341b): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:28:05.078837    4228 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220601112749-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 979453d9341b1c0a4cd16ff505d0e073f930db946f7a93bdd771833ba22bc4ff (br-979453d9341b): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220601112749-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 979453d9341b1c0a4cd16ff505d0e073f930db946f7a93bdd771833ba22bc4ff (br-979453d9341b): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
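The `networks have overlapping IPv4` rejection above happens because the requested 192.168.49.0/24 subnet collides with an existing bridge network's range. A minimal sketch of that overlap condition (this is an illustration using the Go standard library, not minikube's actual subnet-picking code in `network.go`):

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two IPv4 CIDR ranges share any addresses —
// the condition the Docker daemon rejects when creating a bridge network.
func cidrsOverlap(a, b string) bool {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false
	}
	// Two CIDR ranges overlap iff either network contains the other's base address.
	return na.Contains(nb.IP) || nb.Contains(na.IP)
}

func main() {
	// minikube's default subnet vs. a hypothetical stale bridge on the same range
	fmt.Println(cidrsOverlap("192.168.49.0/24", "192.168.49.0/24")) // true: rejected
	fmt.Println(cidrsOverlap("192.168.49.0/24", "192.168.58.0/24")) // false: would be accepted
}
```

Here the conflicting bridge (`br-0c9673f75245` in the stderr above) was presumably left behind by an earlier test profile, so the "free" subnet minikube reserved was not actually free on the daemon side.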
	I0601 11:28:05.092422    4228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:28:06.173543    4228 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0811085s)
	I0601 11:28:06.180546    4228 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:28:07.297528    4228 cli_runner.go:211] docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:28:07.297613    4228 cli_runner.go:217] Completed: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1169691s)
	I0601 11:28:07.297836    4228 client.go:171] LocalClient.Create took 6.6535474s
	I0601 11:28:09.320640    4228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:28:09.328522    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:28:10.413415    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:28:10.413415    4228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0846337s)
	I0601 11:28:10.413415    4228 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:10.701208    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:28:11.824096    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:28:11.824416    4228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.1228757s)
	W0601 11:28:11.824648    4228 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:28:11.824712    4228 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:11.835860    4228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:28:11.842736    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:28:12.937271    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:28:12.937271    4228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0943221s)
	I0601 11:28:12.937435    4228 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:13.246899    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:28:14.335931    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:28:14.336058    4228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0888562s)
	W0601 11:28:14.336189    4228 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:28:14.336282    4228 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:14.336282    4228 start.go:134] duration metric: createHost completed in 13.6955267s
	I0601 11:28:14.336282    4228 start.go:81] releasing machines lock for "default-k8s-different-port-20220601112749-9404", held for 13.6958846s
	W0601 11:28:14.336456    4228 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	I0601 11:28:14.350531    4228 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:28:15.463148    4228 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:15.463148    4228 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1116054s)
	I0601 11:28:15.463148    4228 delete.go:82] Unable to get host status for default-k8s-different-port-20220601112749-9404, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	W0601 11:28:15.463148    4228 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	
	I0601 11:28:15.463148    4228 start.go:614] Will try again in 5 seconds ...
	I0601 11:28:20.472964    4228 start.go:352] acquiring machines lock for default-k8s-different-port-20220601112749-9404: {Name:mk2d253a747261ca3a979b7941df8cd2b45f4516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:28:20.473109    4228 start.go:356] acquired machines lock for "default-k8s-different-port-20220601112749-9404" in 0s
	I0601 11:28:20.473109    4228 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:28:20.473109    4228 fix.go:55] fixHost starting: 
	I0601 11:28:20.511034    4228 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:28:21.583130    4228 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:21.583130    4228 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0720841s)
	I0601 11:28:21.583130    4228 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601112749-9404: state= err=unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:21.583130    4228 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:28:21.588154    4228 out.go:177] * docker "default-k8s-different-port-20220601112749-9404" container is missing, will recreate.
	I0601 11:28:21.591130    4228 delete.go:124] DEMOLISHING default-k8s-different-port-20220601112749-9404 ...
	I0601 11:28:21.603130    4228 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:28:22.696477    4228 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:22.696477    4228 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0933342s)
	W0601 11:28:22.696477    4228 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:22.697007    4228 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:22.711157    4228 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:28:23.818531    4228 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:23.818531    4228 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1070995s)
	I0601 11:28:23.818531    4228 delete.go:82] Unable to get host status for default-k8s-different-port-20220601112749-9404, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:23.824965    4228 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404
	W0601 11:28:24.913350    4228 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:28:24.917932    4228 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404: (1.0883721s)
	I0601 11:28:24.917932    4228 kic.go:356] could not find the container default-k8s-different-port-20220601112749-9404 to remove it. will try anyways
	I0601 11:28:24.925053    4228 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:28:26.030668    4228 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:26.030668    4228 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1056033s)
	W0601 11:28:26.030668    4228 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:26.038970    4228 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-different-port-20220601112749-9404 /bin/bash -c "sudo init 0"
	W0601 11:28:27.193723    4228 cli_runner.go:211] docker exec --privileged -t default-k8s-different-port-20220601112749-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:28:27.193723    4228 cli_runner.go:217] Completed: docker exec --privileged -t default-k8s-different-port-20220601112749-9404 /bin/bash -c "sudo init 0": (1.1547394s)
	I0601 11:28:27.193723    4228 oci.go:625] error shutdown default-k8s-different-port-20220601112749-9404: docker exec --privileged -t default-k8s-different-port-20220601112749-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:28.207488    4228 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:28:29.288905    4228 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:29.288905    4228 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0814046s)
	I0601 11:28:29.288905    4228 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:29.288905    4228 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:28:29.288905    4228 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:29.772983    4228 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:28:30.929251    4228 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:30.929251    4228 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1562558s)
	I0601 11:28:30.929251    4228 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:30.929251    4228 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:28:30.929251    4228 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:31.835522    4228 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:28:32.926988    4228 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:32.926988    4228 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0913414s)
	I0601 11:28:32.926988    4228 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:32.926988    4228 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:28:32.926988    4228 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:33.577145    4228 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:28:34.700213    4228 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:34.700213    4228 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1230556s)
	I0601 11:28:34.700496    4228 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:34.700496    4228 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:28:34.700496    4228 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:35.827398    4228 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:28:36.932770    4228 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:36.932770    4228 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1052591s)
	I0601 11:28:36.932863    4228 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:36.932933    4228 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:28:36.932933    4228 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:38.465693    4228 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:28:39.582382    4228 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:39.582462    4228 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1166235s)
	I0601 11:28:39.582462    4228 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:39.582462    4228 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:28:39.582586    4228 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:42.645592    4228 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:28:43.731499    4228 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:43.731586    4228 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0853225s)
	I0601 11:28:43.731586    4228 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:43.731586    4228 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:28:43.731586    4228 oci.go:88] couldn't shut down default-k8s-different-port-20220601112749-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	 
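The retry cadence above (462ms, 890ms, 636ms, 1.1s, 1.5s, 3.0s between `retry.go:31` attempts) is the shape of a jittered exponential backoff. A minimal sketch of that pattern in Python — a hypothetical helper for illustration, not minikube's actual `retry.go` implementation:

```python
import random
import time


def retry_backoff(fn, attempts=7, base=0.4, cap=5.0):
    """Call fn until it succeeds, sleeping with jittered exponential
    backoff between failures; re-raise the last error when exhausted."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            # exponential growth capped at `cap`, with +/-50% jitter,
            # which is why observed delays are not strictly increasing
            delay = min(cap, base * (2 ** i)) * random.uniform(0.5, 1.5)
            time.sleep(delay)
```

Note how jitter explains the non-monotonic delays in the log (890ms followed by 636ms): each delay is drawn from a range around the exponential target.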
	I0601 11:28:43.738588    4228 cli_runner.go:164] Run: docker rm -f -v default-k8s-different-port-20220601112749-9404
	I0601 11:28:44.821293    4228 cli_runner.go:217] Completed: docker rm -f -v default-k8s-different-port-20220601112749-9404: (1.0825217s)
	I0601 11:28:44.827885    4228 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404
	W0601 11:28:45.922160    4228 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:28:45.922160    4228 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404: (1.094262s)
	I0601 11:28:45.928556    4228 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:28:46.991862    4228 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:28:46.991862    4228 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0632944s)
	I0601 11:28:46.999067    4228 network_create.go:272] running [docker network inspect default-k8s-different-port-20220601112749-9404] to gather additional debugging logs...
	I0601 11:28:46.999067    4228 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404
	W0601 11:28:48.104377    4228 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:28:48.104618    4228 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404: (1.1052981s)
	I0601 11:28:48.104618    4228 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220601112749-9404]: docker network inspect default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220601112749-9404
	I0601 11:28:48.104669    4228 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220601112749-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220601112749-9404
	
	** /stderr **
	W0601 11:28:48.105424    4228 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:28:48.105424    4228 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:28:49.116047    4228 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:28:49.137702    4228 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:28:49.138081    4228 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220601112749-9404" (driver="docker")
	I0601 11:28:49.138081    4228 client.go:168] LocalClient.Create starting
	I0601 11:28:49.138776    4228 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:28:49.138776    4228 main.go:134] libmachine: Decoding PEM data...
	I0601 11:28:49.138776    4228 main.go:134] libmachine: Parsing certificate...
	I0601 11:28:49.139375    4228 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:28:49.139375    4228 main.go:134] libmachine: Decoding PEM data...
	I0601 11:28:49.139375    4228 main.go:134] libmachine: Parsing certificate...
	I0601 11:28:49.147114    4228 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:28:50.237781    4228 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:28:50.237781    4228 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0905996s)
	I0601 11:28:50.245918    4228 network_create.go:272] running [docker network inspect default-k8s-different-port-20220601112749-9404] to gather additional debugging logs...
	I0601 11:28:50.245918    4228 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404
	W0601 11:28:51.327579    4228 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:28:51.327579    4228 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404: (1.0816481s)
	I0601 11:28:51.327579    4228 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220601112749-9404]: docker network inspect default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220601112749-9404
	I0601 11:28:51.327579    4228 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220601112749-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220601112749-9404
	
	** /stderr **
	I0601 11:28:51.333578    4228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:28:52.430371    4228 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0965124s)
	I0601 11:28:52.446708    4228 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000dba068] amended:false}} dirty:map[] misses:0}
	I0601 11:28:52.446708    4228 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:28:52.462363    4228 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000dba068] amended:true}} dirty:map[192.168.49.0:0xc000dba068 192.168.58.0:0xc0005aa288] misses:0}
	I0601 11:28:52.462363    4228 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:28:52.462363    4228 network_create.go:115] attempt to create docker network default-k8s-different-port-20220601112749-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:28:52.469454    4228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404
	W0601 11:28:53.540973    4228 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:28:53.540973    4228 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: (1.0715064s)
	E0601 11:28:53.540973    4228 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220601112749-9404 192.168.58.0/24: create docker network default-k8s-different-port-20220601112749-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d9b7dc97e4a5bfe75f3cb1118416a1814cb2f64ec5b48a096bd9eeda57092336 (br-d9b7dc97e4a5): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:28:53.540973    4228 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220601112749-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d9b7dc97e4a5bfe75f3cb1118416a1814cb2f64ec5b48a096bd9eeda57092336 (br-d9b7dc97e4a5): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220601112749-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d9b7dc97e4a5bfe75f3cb1118416a1814cb2f64ec5b48a096bd9eeda57092336 (br-d9b7dc97e4a5): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
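The daemon rejects the new bridge because 192.168.58.0/24 collides with the address range of an existing network (`br-50298ec25928`), likely left over from a previous test cluster. The overlap check itself can be reproduced offline with Python's `ipaddress` module — a sketch of the check, not Docker's code:

```python
import ipaddress


def subnets_overlap(a: str, b: str) -> bool:
    """True if two CIDR subnets share any addresses, which is the
    condition Docker's daemon rejects with 'networks have overlapping IPv4'."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))
```

In this run, the stale network presumably occupied 192.168.58.0/24 (or a range containing it), so minikube's chosen subnet failed the check and it fell back to starting the container without a dedicated network, hence the "cluster IP change after restart" warning.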
	I0601 11:28:53.555185    4228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:28:54.690712    4228 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1354547s)
	I0601 11:28:54.697713    4228 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:28:55.782823    4228 cli_runner.go:211] docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:28:55.782823    4228 cli_runner.go:217] Completed: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0850975s)
	I0601 11:28:55.782823    4228 client.go:171] LocalClient.Create took 6.6446672s
	I0601 11:28:57.799434    4228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:28:57.805437    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:28:58.895365    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:28:58.895365    4228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0899155s)
	I0601 11:28:58.895365    4228 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:28:59.238613    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:29:00.368493    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:29:00.368542    4228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.1296876s)
	W0601 11:29:00.368542    4228 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:29:00.368542    4228 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:00.376083    4228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:29:00.376083    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:29:01.443582    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:29:01.443659    4228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0672841s)
	I0601 11:29:01.443757    4228 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:01.671962    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:29:02.753363    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:29:02.753363    4228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.081389s)
	W0601 11:29:02.753363    4228 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:29:02.753363    4228 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:02.753363    4228 start.go:134] duration metric: createHost completed in 13.6371618s
	I0601 11:29:02.764344    4228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:29:02.772343    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:29:03.882493    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:29:03.882493    4228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.1101376s)
	I0601 11:29:03.882493    4228 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:04.139132    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:29:05.219413    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:29:05.219413    4228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0802692s)
	W0601 11:29:05.219413    4228 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:29:05.219413    4228 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:05.232184    4228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:29:05.238667    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:29:06.319420    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:29:06.319420    4228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0807407s)
	I0601 11:29:06.319420    4228 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:06.533991    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:29:07.626852    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:29:07.626852    4228 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0927137s)
	W0601 11:29:07.627061    4228 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:29:07.627150    4228 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:07.627150    4228 fix.go:57] fixHost completed within 47.1535073s
	I0601 11:29:07.627150    4228 start.go:81] releasing machines lock for "default-k8s-different-port-20220601112749-9404", held for 47.1535073s
	W0601 11:29:07.627619    4228 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220601112749-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220601112749-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	
	I0601 11:29:07.631802    4228 out.go:177] 
	W0601 11:29:07.633929    4228 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	
	W0601 11:29:07.633929    4228 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:29:07.634872    4228 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:29:07.637748    4228 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220601112749-9404 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601112749-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220601112749-9404: exit status 1 (1.1584085s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (2.9790739s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:29:11.896059    1960 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220601112749-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (82.30s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (4.47s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20220601112334-9404" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220601112334-9404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context no-preload-20220601112334-9404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (248.5983ms)

** stderr ** 
	error: context "no-preload-20220601112334-9404" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-20220601112334-9404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601112334-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220601112334-9404: exit status 1 (1.1543539s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (3.0543048s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:27:55.894417    9428 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220601112334-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (4.47s)

TestStartStop/group/newest-cni/serial/FirstStart (82.09s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220601112753-9404 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-20220601112753-9404 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m17.7907602s)

-- stdout --
	* [newest-cni-20220601112753-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node newest-cni-20220601112753-9404 in cluster newest-cni-20220601112753-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-20220601112753-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:27:54.059083    6224 out.go:296] Setting OutFile to fd 1808 ...
	I0601 11:27:54.117812    6224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:27:54.117812    6224 out.go:309] Setting ErrFile to fd 1472...
	I0601 11:27:54.117812    6224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:27:54.133267    6224 out.go:303] Setting JSON to false
	I0601 11:27:54.136328    6224 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14809,"bootTime":1654068065,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:27:54.136328    6224 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:27:54.140679    6224 out.go:177] * [newest-cni-20220601112753-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:27:54.144269    6224 notify.go:193] Checking for updates...
	I0601 11:27:54.147447    6224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:27:54.149656    6224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:27:54.152023    6224 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:27:54.154325    6224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:27:54.157632    6224 config.go:178] Loaded profile config "embed-certs-20220601112350-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:27:54.157632    6224 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:27:54.158210    6224 config.go:178] Loaded profile config "no-preload-20220601112334-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:27:54.158210    6224 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:27:56.921320    6224 docker.go:137] docker version: linux-20.10.14
	I0601 11:27:56.931192    6224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:27:59.100653    6224 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.169383s)
	I0601 11:27:59.100653    6224 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:27:57.9868349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:27:59.106656    6224 out.go:177] * Using the docker driver based on user configuration
	I0601 11:27:59.108653    6224 start.go:284] selected driver: docker
	I0601 11:27:59.108653    6224 start.go:806] validating driver "docker" against <nil>
	I0601 11:27:59.108653    6224 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:27:59.170952    6224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:28:01.363585    6224 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1925503s)
	I0601 11:28:01.363968    6224 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-01 11:28:00.2852569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:28:01.364271    6224 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	W0601 11:28:01.364340    6224 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0601 11:28:01.364933    6224 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0601 11:28:01.369103    6224 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:28:01.373106    6224 cni.go:95] Creating CNI manager for ""
	I0601 11:28:01.373106    6224 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:28:01.373106    6224 start_flags.go:306] config:
	{Name:newest-cni-20220601112753-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601112753-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:28:01.376488    6224 out.go:177] * Starting control plane node newest-cni-20220601112753-9404 in cluster newest-cni-20220601112753-9404
	I0601 11:28:01.377917    6224 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:28:01.380913    6224 out.go:177] * Pulling base image ...
	I0601 11:28:01.383910    6224 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:28:01.383910    6224 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:28:01.383910    6224 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:28:01.383910    6224 cache.go:57] Caching tarball of preloaded images
	I0601 11:28:01.383910    6224 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:28:01.383910    6224 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:28:01.384904    6224 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20220601112753-9404\config.json ...
	I0601 11:28:01.384904    6224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20220601112753-9404\config.json: {Name:mk9c5a876ab58de64feae224c237d5288ed65fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:28:02.463365    6224 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:28:02.463365    6224 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:28:02.463365    6224 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:28:02.463365    6224 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:28:02.463885    6224 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:28:02.463982    6224 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:28:02.464044    6224 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:28:02.464044    6224 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:28:02.464044    6224 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:28:04.814646    6224 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:28:04.814707    6224 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:28:04.814707    6224 start.go:352] acquiring machines lock for newest-cni-20220601112753-9404: {Name:mka9c5833b483068b0a73f6342d879a5ebe04326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:28:04.814707    6224 start.go:356] acquired machines lock for "newest-cni-20220601112753-9404" in 0s
	I0601 11:28:04.814707    6224 start.go:91] Provisioning new machine with config: &{Name:newest-cni-20220601112753-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601112753-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:28:04.815356    6224 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:28:04.820582    6224 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:28:04.820640    6224 start.go:165] libmachine.API.Create for "newest-cni-20220601112753-9404" (driver="docker")
	I0601 11:28:04.820640    6224 client.go:168] LocalClient.Create starting
	I0601 11:28:04.820640    6224 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:28:04.821827    6224 main.go:134] libmachine: Decoding PEM data...
	I0601 11:28:04.821827    6224 main.go:134] libmachine: Parsing certificate...
	I0601 11:28:04.821827    6224 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:28:04.822413    6224 main.go:134] libmachine: Decoding PEM data...
	I0601 11:28:04.822413    6224 main.go:134] libmachine: Parsing certificate...
	I0601 11:28:04.831040    6224 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:28:05.908970    6224 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:28:05.908970    6224 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0778115s)
	I0601 11:28:05.914967    6224 network_create.go:272] running [docker network inspect newest-cni-20220601112753-9404] to gather additional debugging logs...
	I0601 11:28:05.914967    6224 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404
	W0601 11:28:07.016011    6224 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:28:07.016074    6224 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404: (1.1010314s)
	I0601 11:28:07.016074    6224 network_create.go:275] error running [docker network inspect newest-cni-20220601112753-9404]: docker network inspect newest-cni-20220601112753-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220601112753-9404
	I0601 11:28:07.016074    6224 network_create.go:277] output of [docker network inspect newest-cni-20220601112753-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220601112753-9404
	
	** /stderr **
	I0601 11:28:07.022968    6224 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:28:08.130128    6224 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1070418s)
	I0601 11:28:08.152827    6224 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00072a038] misses:0}
	I0601 11:28:08.152827    6224 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
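	The subnet parameters logged above (gateway, client min/max, broadcast) follow directly from the chosen CIDR. A minimal Python sketch of that derivation for 192.168.49.0/24 — not minikube's actual network.go code:

```python
import ipaddress

# Derive the per-subnet parameters minikube logs for a free private CIDR.
net = ipaddress.ip_network("192.168.49.0/24")
hosts = list(net.hosts())          # usable addresses, .1 through .254

gateway = hosts[0]                 # first usable address: 192.168.49.1
client_min = hosts[1]              # first address handed to clients: 192.168.49.2
client_max = hosts[-1]             # last usable address: 192.168.49.254
broadcast = net.broadcast_address  # 192.168.49.255

print(gateway, client_min, client_max, broadcast)
```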
	I0601 11:28:08.152827    6224 network_create.go:115] attempt to create docker network newest-cni-20220601112753-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:28:08.159757    6224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404
	W0601 11:28:09.261205    6224 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:28:09.261302    6224 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: (1.1012517s)
	E0601 11:28:09.261474    6224 network_create.go:104] error while trying to create docker network newest-cni-20220601112753-9404 192.168.49.0/24: create docker network newest-cni-20220601112753-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c2d55494a7ae7ed22a971cf4ed55a77b9c715e738a56ca3ff4f39bf6eaedcba3 (br-c2d55494a7ae): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:28:09.261474    6224 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220601112753-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c2d55494a7ae7ed22a971cf4ed55a77b9c715e738a56ca3ff4f39bf6eaedcba3 (br-c2d55494a7ae): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220601112753-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c2d55494a7ae7ed22a971cf4ed55a77b9c715e738a56ca3ff4f39bf6eaedcba3 (br-c2d55494a7ae): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
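	The daemon rejects the create because the requested 192.168.49.0/24 range overlaps an existing bridge (br-0c9673f75245). The log does not show that bridge's subnet, so the `existing` value below is a hypothetical stand-in; the overlap check itself can be sketched with Python's ipaddress module:

```python
import ipaddress

requested = ipaddress.ip_network("192.168.49.0/24")
# Hypothetical subnet for the pre-existing bridge; the report does not log it.
existing = ipaddress.ip_network("192.168.49.0/24")

# Docker refuses the create when any existing network's IPv4 range overlaps.
print(requested.overlaps(existing))                                 # True
print(requested.overlaps(ipaddress.ip_network("192.168.58.0/24")))  # False
```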
	
	I0601 11:28:09.275675    6224 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:28:10.444496    6224 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1687486s)
	I0601 11:28:10.451405    6224 cli_runner.go:164] Run: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:28:11.545867    6224 cli_runner.go:211] docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:28:11.545934    6224 cli_runner.go:217] Completed: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0943358s)
	I0601 11:28:11.546038    6224 client.go:171] LocalClient.Create took 6.7252713s
	I0601 11:28:13.566364    6224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:28:13.573019    6224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:28:14.662679    6224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:28:14.662679    6224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0896473s)
	I0601 11:28:14.662679    6224 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:14.958497    6224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:28:16.051010    6224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:28:16.051078    6224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0923474s)
	W0601 11:28:16.051285    6224 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:28:16.051285    6224 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:16.060762    6224 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:28:16.067677    6224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:28:17.135230    6224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:28:17.135230    6224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0675417s)
	I0601 11:28:17.135230    6224 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:17.444300    6224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:28:18.525537    6224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:28:18.525537    6224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0812244s)
	W0601 11:28:18.525537    6224 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:28:18.525537    6224 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:18.525537    6224 start.go:134] duration metric: createHost completed in 13.7099388s
	I0601 11:28:18.525537    6224 start.go:81] releasing machines lock for "newest-cni-20220601112753-9404", held for 13.7106735s
	W0601 11:28:18.525537    6224 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	I0601 11:28:18.544538    6224 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:28:19.625544    6224 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:19.625544    6224 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0809947s)
	I0601 11:28:19.625544    6224 delete.go:82] Unable to get host status for newest-cni-20220601112753-9404, assuming it has already been deleted: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	W0601 11:28:19.625544    6224 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	
	I0601 11:28:19.625544    6224 start.go:614] Will try again in 5 seconds ...
	I0601 11:28:24.626541    6224 start.go:352] acquiring machines lock for newest-cni-20220601112753-9404: {Name:mka9c5833b483068b0a73f6342d879a5ebe04326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:28:24.626541    6224 start.go:356] acquired machines lock for "newest-cni-20220601112753-9404" in 0s
	I0601 11:28:24.626541    6224 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:28:24.627059    6224 fix.go:55] fixHost starting: 
	I0601 11:28:24.641575    6224 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:28:25.718747    6224 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:25.718928    6224 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0771602s)
	I0601 11:28:25.719031    6224 fix.go:103] recreateIfNeeded on newest-cni-20220601112753-9404: state= err=unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:25.719031    6224 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:28:25.721978    6224 out.go:177] * docker "newest-cni-20220601112753-9404" container is missing, will recreate.
	I0601 11:28:25.724307    6224 delete.go:124] DEMOLISHING newest-cni-20220601112753-9404 ...
	I0601 11:28:25.736000    6224 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:28:26.842402    6224 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:26.842402    6224 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1063897s)
	W0601 11:28:26.842402    6224 stop.go:75] unable to get state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:26.842402    6224 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:26.858393    6224 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:28:27.963239    6224 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:27.963239    6224 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1048334s)
	I0601 11:28:27.963239    6224 delete.go:82] Unable to get host status for newest-cni-20220601112753-9404, assuming it has already been deleted: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:27.970241    6224 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220601112753-9404
	W0601 11:28:29.052424    6224 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:28:29.052424    6224 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220601112753-9404: (1.0821711s)
	I0601 11:28:29.052424    6224 kic.go:356] could not find the container newest-cni-20220601112753-9404 to remove it. will try anyways
	I0601 11:28:29.059442    6224 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:28:30.160003    6224 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:30.160003    6224 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1005482s)
	W0601 11:28:30.160003    6224 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:30.166006    6224 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220601112753-9404 /bin/bash -c "sudo init 0"
	W0601 11:28:31.245554    6224 cli_runner.go:211] docker exec --privileged -t newest-cni-20220601112753-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:28:31.245554    6224 cli_runner.go:217] Completed: docker exec --privileged -t newest-cni-20220601112753-9404 /bin/bash -c "sudo init 0": (1.0795355s)
	I0601 11:28:31.245554    6224 oci.go:625] error shutdown newest-cni-20220601112753-9404: docker exec --privileged -t newest-cni-20220601112753-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:32.261318    6224 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:28:33.366683    6224 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:33.366735    6224 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1047114s)
	I0601 11:28:33.366839    6224 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:33.366874    6224 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:28:33.366874    6224 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:33.843861    6224 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:28:34.934567    6224 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:34.934567    6224 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0906937s)
	I0601 11:28:34.934567    6224 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:34.934567    6224 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:28:34.934567    6224 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:35.841982    6224 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:28:36.901510    6224 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:36.901510    6224 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0595158s)
	I0601 11:28:36.901510    6224 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:36.901510    6224 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:28:36.901510    6224 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:37.566195    6224 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:28:38.643370    6224 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:38.643370    6224 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0770421s)
	I0601 11:28:38.643498    6224 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:38.643498    6224 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:28:38.643583    6224 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:39.760000    6224 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:28:40.827020    6224 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:40.827020    6224 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0668546s)
	I0601 11:28:40.827020    6224 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:40.827020    6224 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:28:40.827020    6224 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:42.346709    6224 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:28:43.479499    6224 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:43.479548    6224 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1327376s)
	I0601 11:28:43.479696    6224 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:43.479696    6224 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:28:43.479696    6224 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:46.543422    6224 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:28:47.679684    6224 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:47.679809    6224 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1361275s)
	I0601 11:28:47.679809    6224 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:28:47.679809    6224 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:28:47.679809    6224 oci.go:88] couldn't shut down newest-cni-20220601112753-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	 
	I0601 11:28:47.686280    6224 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220601112753-9404
	I0601 11:28:48.782688    6224 cli_runner.go:217] Completed: docker rm -f -v newest-cni-20220601112753-9404: (1.0963964s)
	I0601 11:28:48.790842    6224 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220601112753-9404
	W0601 11:28:49.863062    6224 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:28:49.863062    6224 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220601112753-9404: (1.0721528s)
	I0601 11:28:49.871360    6224 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:28:50.981112    6224 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:28:50.981112    6224 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1095625s)
	I0601 11:28:50.989109    6224 network_create.go:272] running [docker network inspect newest-cni-20220601112753-9404] to gather additional debugging logs...
	I0601 11:28:50.989109    6224 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404
	W0601 11:28:52.084825    6224 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:28:52.084825    6224 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404: (1.0957038s)
	I0601 11:28:52.084825    6224 network_create.go:275] error running [docker network inspect newest-cni-20220601112753-9404]: docker network inspect newest-cni-20220601112753-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220601112753-9404
	I0601 11:28:52.084825    6224 network_create.go:277] output of [docker network inspect newest-cni-20220601112753-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220601112753-9404
	
	** /stderr **
	W0601 11:28:52.085796    6224 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:28:52.085796    6224 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:28:53.100442    6224 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:28:53.107284    6224 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:28:53.107553    6224 start.go:165] libmachine.API.Create for "newest-cni-20220601112753-9404" (driver="docker")
	I0601 11:28:53.107683    6224 client.go:168] LocalClient.Create starting
	I0601 11:28:53.107723    6224 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:28:53.108267    6224 main.go:134] libmachine: Decoding PEM data...
	I0601 11:28:53.108348    6224 main.go:134] libmachine: Parsing certificate...
	I0601 11:28:53.108559    6224 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:28:53.108788    6224 main.go:134] libmachine: Decoding PEM data...
	I0601 11:28:53.108834    6224 main.go:134] libmachine: Parsing certificate...
	I0601 11:28:53.116483    6224 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:28:54.201737    6224 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:28:54.201737    6224 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0852413s)
	I0601 11:28:54.209081    6224 network_create.go:272] running [docker network inspect newest-cni-20220601112753-9404] to gather additional debugging logs...
	I0601 11:28:54.209081    6224 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404
	W0601 11:28:55.322291    6224 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:28:55.322291    6224 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404: (1.113198s)
	I0601 11:28:55.322291    6224 network_create.go:275] error running [docker network inspect newest-cni-20220601112753-9404]: docker network inspect newest-cni-20220601112753-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220601112753-9404
	I0601 11:28:55.322291    6224 network_create.go:277] output of [docker network inspect newest-cni-20220601112753-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220601112753-9404
	
	** /stderr **
	I0601 11:28:55.330289    6224 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:28:56.428457    6224 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0981551s)
	I0601 11:28:56.444431    6224 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00072a038] amended:false}} dirty:map[] misses:0}
	I0601 11:28:56.444431    6224 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:28:56.459448    6224 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00072a038] amended:true}} dirty:map[192.168.49.0:0xc00072a038 192.168.58.0:0xc000006ad0] misses:0}
	I0601 11:28:56.460350    6224 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:28:56.460473    6224 network_create.go:115] attempt to create docker network newest-cni-20220601112753-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:28:56.467730    6224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404
	W0601 11:28:57.538349    6224 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:28:57.538349    6224 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: (1.0706073s)
	E0601 11:28:57.538349    6224 network_create.go:104] error while trying to create docker network newest-cni-20220601112753-9404 192.168.58.0/24: create docker network newest-cni-20220601112753-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 57be3db4f1512d81edee64e4082c56d9ab3a7cc61f1612952096fbaa58409ede (br-57be3db4f151): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:28:57.538349    6224 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220601112753-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 57be3db4f1512d81edee64e4082c56d9ab3a7cc61f1612952096fbaa58409ede (br-57be3db4f151): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220601112753-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 57be3db4f1512d81edee64e4082c56d9ab3a7cc61f1612952096fbaa58409ede (br-57be3db4f151): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:28:57.550349    6224 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:28:58.633005    6224 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0825916s)
	I0601 11:28:58.640797    6224 cli_runner.go:164] Run: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:28:59.702645    6224 cli_runner.go:211] docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:28:59.702645    6224 cli_runner.go:217] Completed: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0618365s)
	I0601 11:28:59.702645    6224 client.go:171] LocalClient.Create took 6.5948876s
	I0601 11:29:01.721935    6224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:29:01.729284    6224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:29:02.785331    6224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:29:02.785331    6224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0560348s)
	I0601 11:29:02.785331    6224 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:03.136051    6224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:29:04.291069    6224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:29:04.291069    6224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.1550048s)
	W0601 11:29:04.291069    6224 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:29:04.291069    6224 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:04.300075    6224 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:29:04.306065    6224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:29:05.373746    6224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:29:05.373746    6224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0676032s)
	I0601 11:29:05.373746    6224 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:05.606863    6224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:29:06.695883    6224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:29:06.695883    6224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0890077s)
	W0601 11:29:06.695883    6224 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:29:06.695883    6224 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:06.695883    6224 start.go:134] duration metric: createHost completed in 13.5950339s
	I0601 11:29:06.704880    6224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:29:06.710878    6224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:29:07.797551    6224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:29:07.797551    6224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0866611s)
	I0601 11:29:07.797551    6224 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:08.059015    6224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:29:09.154124    6224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:29:09.154181    6224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0949805s)
	W0601 11:29:09.154507    6224 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:29:09.154536    6224 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:09.165238    6224 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:29:09.171835    6224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:29:10.257089    6224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:29:10.257089    6224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0852412s)
	I0601 11:29:10.257089    6224 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:10.468057    6224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:29:11.560025    6224 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:29:11.560084    6224 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0917953s)
	W0601 11:29:11.560084    6224 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:29:11.560084    6224 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:11.560084    6224 fix.go:57] fixHost completed within 46.9330125s
	I0601 11:29:11.560084    6224 start.go:81] releasing machines lock for "newest-cni-20220601112753-9404", held for 46.9330125s
	W0601 11:29:11.560837    6224 out.go:239] * Failed to start docker container. Running "minikube delete -p newest-cni-20220601112753-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p newest-cni-20220601112753-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	
	I0601 11:29:11.565991    6224 out.go:177] 
	W0601 11:29:11.568361    6224 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	
	W0601 11:29:11.568428    6224 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:29:11.568428    6224 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:29:11.571663    6224 out.go:177] 

** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p newest-cni-20220601112753-9404 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220601112753-9404

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220601112753-9404: exit status 1 (1.1625389s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220601112753-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404: exit status 7 (3.0326512s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:29:15.870512    1168 status.go:247] status error: host: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220601112753-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (82.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (7.48s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20220601112334-9404 "sudo crictl images -o json"

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p no-preload-20220601112334-9404 "sudo crictl images -o json": exit status 80 (3.3025514s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_6.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p no-preload-20220601112334-9404 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json unexpected end of JSON input. output:


start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220601112334-9404: exit status 1 (1.159377s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (3.0103157s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:28:03.393558    7236 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220601112334-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (7.48s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (4.28s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20220601112350-9404" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601112350-9404

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220601112350-9404: exit status 1 (1.1878977s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (3.084927s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:28:07.218944    9208 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220601112350-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (4.28s)

TestStartStop/group/no-preload/serial/Pause (11.69s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20220601112334-9404 --alsologtostderr -v=1

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p no-preload-20220601112334-9404 --alsologtostderr -v=1: exit status 80 (3.2566617s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0601 11:28:03.655154    1356 out.go:296] Setting OutFile to fd 1976 ...
	I0601 11:28:03.717172    1356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:28:03.717172    1356 out.go:309] Setting ErrFile to fd 772...
	I0601 11:28:03.718153    1356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:28:03.729154    1356 out.go:303] Setting JSON to false
	I0601 11:28:03.729154    1356 mustload.go:65] Loading cluster: no-preload-20220601112334-9404
	I0601 11:28:03.729154    1356 config.go:178] Loaded profile config "no-preload-20220601112334-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:28:03.746155    1356 cli_runner.go:164] Run: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}
	W0601 11:28:06.367882    1356 cli_runner.go:211] docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:06.367882    1356 cli_runner.go:217] Completed: docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: (2.6216976s)
	I0601 11:28:06.372617    1356 out.go:177] 
	W0601 11:28:06.374643    1356 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404
	
	W0601 11:28:06.374643    1356 out.go:239] * 
	* 
	W0601 11:28:06.630901    1356 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_11.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_11.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:28:06.632914    1356 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p no-preload-20220601112334-9404 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220601112334-9404: exit status 1 (1.188672s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (3.0066762s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:28:10.849900    3808 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220601112334-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220601112334-9404: exit status 1 (1.1662929s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220601112334-9404 -n no-preload-20220601112334-9404: exit status 7 (3.0443378s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:28:15.069257    9352 status.go:247] status error: host: state: unknown state "no-preload-20220601112334-9404": docker container inspect no-preload-20220601112334-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220601112334-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220601112334-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (11.69s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (4.37s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20220601112350-9404" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220601112350-9404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context embed-certs-20220601112350-9404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (246.0554ms)

** stderr ** 
	error: context "embed-certs-20220601112350-9404" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-20220601112350-9404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601112350-9404

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220601112350-9404: exit status 1 (1.1542635s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (2.9571824s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:28:11.607440    5580 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220601112350-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (4.37s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (7.45s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20220601112350-9404 "sudo crictl images -o json"

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p embed-certs-20220601112350-9404 "sudo crictl images -o json": exit status 80 (3.2914347s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_6.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p embed-certs-20220601112350-9404 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json unexpected end of JSON input. output:


start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601112350-9404

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220601112350-9404: exit status 1 (1.1888048s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (2.9631143s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:28:19.029912    7252 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220601112350-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (7.45s)

TestStartStop/group/embed-certs/serial/Pause (11.55s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20220601112350-9404 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p embed-certs-20220601112350-9404 --alsologtostderr -v=1: exit status 80 (3.1453051s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0601 11:28:19.320920    9712 out.go:296] Setting OutFile to fd 1796 ...
	I0601 11:28:19.373920    9712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:28:19.373920    9712 out.go:309] Setting ErrFile to fd 1832...
	I0601 11:28:19.373920    9712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:28:19.386917    9712 out.go:303] Setting JSON to false
	I0601 11:28:19.386917    9712 mustload.go:65] Loading cluster: embed-certs-20220601112350-9404
	I0601 11:28:19.386917    9712 config.go:178] Loaded profile config "embed-certs-20220601112350-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:28:19.400919    9712 cli_runner.go:164] Run: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}
	W0601 11:28:21.926182    9712 cli_runner.go:211] docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:21.926182    9712 cli_runner.go:217] Completed: docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: (2.525234s)
	I0601 11:28:21.930212    9712 out.go:177] 
	W0601 11:28:21.932186    9712 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404
	
	W0601 11:28:21.932186    9712 out.go:239] * 
	* 
	W0601 11:28:22.184905    9712 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_11.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_11.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:28:22.187906    9712 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p embed-certs-20220601112350-9404 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601112350-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220601112350-9404: exit status 1 (1.1563536s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (3.0272313s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:28:26.385017    3880 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220601112350-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601112350-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220601112350-9404: exit status 1 (1.2134445s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220601112350-9404 -n embed-certs-20220601112350-9404: exit status 7 (2.9945127s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:28:30.600412    2932 status.go:247] status error: host: state: unknown state "embed-certs-20220601112350-9404": docker container inspect embed-certs-20220601112350-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220601112350-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220601112350-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (11.55s)

TestNetworkPlugins/group/auto/Start (77.6s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20220601112023-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p auto-20220601112023-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: exit status 60 (1m17.5054187s)

-- stdout --
	* [auto-20220601112023-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node auto-20220601112023-9404 in cluster auto-20220601112023-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "auto-20220601112023-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:28:31.551746    3448 out.go:296] Setting OutFile to fd 1700 ...
	I0601 11:28:31.610256    3448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:28:31.610256    3448 out.go:309] Setting ErrFile to fd 1936...
	I0601 11:28:31.610256    3448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:28:31.621523    3448 out.go:303] Setting JSON to false
	I0601 11:28:31.623992    3448 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14847,"bootTime":1654068064,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:28:31.624512    3448 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:28:31.630902    3448 out.go:177] * [auto-20220601112023-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:28:31.636651    3448 notify.go:193] Checking for updates...
	I0601 11:28:31.638903    3448 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:28:31.641029    3448 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:28:31.643485    3448 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:28:31.646064    3448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:28:31.649465    3448 config.go:178] Loaded profile config "default-k8s-different-port-20220601112749-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:28:31.649999    3448 config.go:178] Loaded profile config "embed-certs-20220601112350-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:28:31.650281    3448 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:28:31.650819    3448 config.go:178] Loaded profile config "newest-cni-20220601112753-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:28:31.650938    3448 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:28:34.415856    3448 docker.go:137] docker version: linux-20.10.14
	I0601 11:28:34.418536    3448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:28:36.492323    3448 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0737629s)
	I0601 11:28:36.492323    3448 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:28:35.4436138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:28:36.495323    3448 out.go:177] * Using the docker driver based on user configuration
	I0601 11:28:36.499323    3448 start.go:284] selected driver: docker
	I0601 11:28:36.499323    3448 start.go:806] validating driver "docker" against <nil>
	I0601 11:28:36.499323    3448 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:28:36.565617    3448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:28:38.674491    3448 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1087761s)
	I0601 11:28:38.674604    3448 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:28:37.6043483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:28:38.674604    3448 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:28:38.675329    3448 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:28:38.679508    3448 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:28:38.681041    3448 cni.go:95] Creating CNI manager for ""
	I0601 11:28:38.681041    3448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:28:38.681563    3448 start_flags.go:306] config:
	{Name:auto-20220601112023-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:auto-20220601112023-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:28:38.682994    3448 out.go:177] * Starting control plane node auto-20220601112023-9404 in cluster auto-20220601112023-9404
	I0601 11:28:38.686319    3448 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:28:38.688868    3448 out.go:177] * Pulling base image ...
	I0601 11:28:38.690997    3448 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:28:38.690997    3448 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:28:38.690997    3448 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:28:38.690997    3448 cache.go:57] Caching tarball of preloaded images
	I0601 11:28:38.691749    3448 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:28:38.691749    3448 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:28:38.692460    3448 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-20220601112023-9404\config.json ...
	I0601 11:28:38.692593    3448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-20220601112023-9404\config.json: {Name:mk840f1edc4745f631bad8292b3d82a78f5b3715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:28:39.786000    3448 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:28:39.786000    3448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:28:39.786000    3448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:28:39.786000    3448 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:28:39.786000    3448 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:28:39.786000    3448 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:28:39.786000    3448 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:28:39.786000    3448 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:28:39.786000    3448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:28:42.073351    3448 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:28:42.073417    3448 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:28:42.073444    3448 start.go:352] acquiring machines lock for auto-20220601112023-9404: {Name:mka5ce90a5cf5ef943bbeb67a50f4a2175c799f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:28:42.073444    3448 start.go:356] acquired machines lock for "auto-20220601112023-9404" in 0s
	I0601 11:28:42.073444    3448 start.go:91] Provisioning new machine with config: &{Name:auto-20220601112023-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:auto-20220601112023-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:28:42.074193    3448 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:28:42.077581    3448 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:28:42.078073    3448 start.go:165] libmachine.API.Create for "auto-20220601112023-9404" (driver="docker")
	I0601 11:28:42.078125    3448 client.go:168] LocalClient.Create starting
	I0601 11:28:42.078125    3448 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:28:42.078697    3448 main.go:134] libmachine: Decoding PEM data...
	I0601 11:28:42.078697    3448 main.go:134] libmachine: Parsing certificate...
	I0601 11:28:42.078831    3448 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:28:42.078831    3448 main.go:134] libmachine: Decoding PEM data...
	I0601 11:28:42.078831    3448 main.go:134] libmachine: Parsing certificate...
	I0601 11:28:42.089064    3448 cli_runner.go:164] Run: docker network inspect auto-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:28:43.179062    3448 cli_runner.go:211] docker network inspect auto-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:28:43.179062    3448 cli_runner.go:217] Completed: docker network inspect auto-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0893605s)
	I0601 11:28:43.187064    3448 network_create.go:272] running [docker network inspect auto-20220601112023-9404] to gather additional debugging logs...
	I0601 11:28:43.187064    3448 cli_runner.go:164] Run: docker network inspect auto-20220601112023-9404
	W0601 11:28:44.258514    3448 cli_runner.go:211] docker network inspect auto-20220601112023-9404 returned with exit code 1
	I0601 11:28:44.258514    3448 cli_runner.go:217] Completed: docker network inspect auto-20220601112023-9404: (1.0714376s)
	I0601 11:28:44.258514    3448 network_create.go:275] error running [docker network inspect auto-20220601112023-9404]: docker network inspect auto-20220601112023-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220601112023-9404
	I0601 11:28:44.258514    3448 network_create.go:277] output of [docker network inspect auto-20220601112023-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220601112023-9404
	
	** /stderr **
	I0601 11:28:44.264514    3448 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:28:45.340352    3448 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0757512s)
	I0601 11:28:45.359871    3448 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006390] misses:0}
	I0601 11:28:45.360727    3448 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:28:45.360797    3448 network_create.go:115] attempt to create docker network auto-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:28:45.367089    3448 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601112023-9404
	W0601 11:28:46.455364    3448 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601112023-9404 returned with exit code 1
	I0601 11:28:46.455364    3448 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601112023-9404: (1.0882619s)
	E0601 11:28:46.455364    3448 network_create.go:104] error while trying to create docker network auto-20220601112023-9404 192.168.49.0/24: create docker network auto-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cb1685aa5f6cdf4da2ba33c17b4417106b59baf558dffbc9dd27751e092597d3 (br-cb1685aa5f6c): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:28:46.455364    3448 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cb1685aa5f6cdf4da2ba33c17b4417106b59baf558dffbc9dd27751e092597d3 (br-cb1685aa5f6c): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cb1685aa5f6cdf4da2ba33c17b4417106b59baf558dffbc9dd27751e092597d3 (br-cb1685aa5f6c): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:28:46.467365    3448 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:28:47.602850    3448 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1354727s)
	I0601 11:28:47.621256    3448 cli_runner.go:164] Run: docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:28:48.720441    3448 cli_runner.go:211] docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:28:48.720441    3448 cli_runner.go:217] Completed: docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0990937s)
	I0601 11:28:48.720441    3448 client.go:171] LocalClient.Create took 6.642241s
	I0601 11:28:50.741673    3448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:28:50.748912    3448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404
	W0601 11:28:51.835701    3448 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404 returned with exit code 1
	I0601 11:28:51.835701    3448 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: (1.086776s)
	I0601 11:28:51.835701    3448 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:28:52.124202    3448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404
	W0601 11:28:53.180494    3448 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404 returned with exit code 1
	I0601 11:28:53.180494    3448 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: (1.0562795s)
	W0601 11:28:53.180494    3448 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	
	W0601 11:28:53.180494    3448 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:28:53.190481    3448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:28:53.196503    3448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404
	W0601 11:28:54.328108    3448 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404 returned with exit code 1
	I0601 11:28:54.328108    3448 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: (1.1315925s)
	I0601 11:28:54.328108    3448 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:28:54.638717    3448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404
	W0601 11:28:55.750818    3448 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404 returned with exit code 1
	I0601 11:28:55.750818    3448 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: (1.1120888s)
	W0601 11:28:55.750818    3448 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	
	W0601 11:28:55.750818    3448 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:28:55.750818    3448 start.go:134] duration metric: createHost completed in 13.6764705s
	I0601 11:28:55.750818    3448 start.go:81] releasing machines lock for "auto-20220601112023-9404", held for 13.67722s
	W0601 11:28:55.750818    3448 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for auto-20220601112023-9404 container: docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/auto-20220601112023-9404': mkdir /var/lib/docker/volumes/auto-20220601112023-9404: read-only file system
	I0601 11:28:55.765819    3448 cli_runner.go:164] Run: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}
	W0601 11:28:56.879567    3448 cli_runner.go:211] docker container inspect auto-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:28:56.879567    3448 cli_runner.go:217] Completed: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: (1.1137349s)
	I0601 11:28:56.879567    3448 delete.go:82] Unable to get host status for auto-20220601112023-9404, assuming it has already been deleted: state: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	W0601 11:28:56.879567    3448 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for auto-20220601112023-9404 container: docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/auto-20220601112023-9404': mkdir /var/lib/docker/volumes/auto-20220601112023-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for auto-20220601112023-9404 container: docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/auto-20220601112023-9404': mkdir /var/lib/docker/volumes/auto-20220601112023-9404: read-only file system
	
	I0601 11:28:56.879567    3448 start.go:614] Will try again in 5 seconds ...
	I0601 11:29:01.880590    3448 start.go:352] acquiring machines lock for auto-20220601112023-9404: {Name:mka5ce90a5cf5ef943bbeb67a50f4a2175c799f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:29:01.880590    3448 start.go:356] acquired machines lock for "auto-20220601112023-9404" in 0s
	I0601 11:29:01.881111    3448 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:29:01.881111    3448 fix.go:55] fixHost starting: 
	I0601 11:29:01.893237    3448 cli_runner.go:164] Run: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}
	W0601 11:29:02.987400    3448 cli_runner.go:211] docker container inspect auto-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:02.987400    3448 cli_runner.go:217] Completed: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: (1.0941501s)
	I0601 11:29:02.987400    3448 fix.go:103] recreateIfNeeded on auto-20220601112023-9404: state= err=unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:02.987400    3448 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:29:02.996073    3448 out.go:177] * docker "auto-20220601112023-9404" container is missing, will recreate.
	I0601 11:29:02.998490    3448 delete.go:124] DEMOLISHING auto-20220601112023-9404 ...
	I0601 11:29:03.011623    3448 cli_runner.go:164] Run: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}
	W0601 11:29:04.085122    3448 cli_runner.go:211] docker container inspect auto-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:04.085122    3448 cli_runner.go:217] Completed: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: (1.0734868s)
	W0601 11:29:04.085122    3448 stop.go:75] unable to get state: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:04.085122    3448 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:04.097120    3448 cli_runner.go:164] Run: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}
	W0601 11:29:05.172131    3448 cli_runner.go:211] docker container inspect auto-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:05.172221    3448 cli_runner.go:217] Completed: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: (1.0747932s)
	I0601 11:29:05.172408    3448 delete.go:82] Unable to get host status for auto-20220601112023-9404, assuming it has already been deleted: state: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:05.179213    3448 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220601112023-9404
	W0601 11:29:06.241210    3448 cli_runner.go:211] docker container inspect -f {{.Id}} auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:06.241401    3448 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} auto-20220601112023-9404: (1.0619856s)
	I0601 11:29:06.241461    3448 kic.go:356] could not find the container auto-20220601112023-9404 to remove it. will try anyways
	I0601 11:29:06.248828    3448 cli_runner.go:164] Run: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}
	W0601 11:29:07.328292    3448 cli_runner.go:211] docker container inspect auto-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:07.328292    3448 cli_runner.go:217] Completed: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: (1.079452s)
	W0601 11:29:07.328292    3448 oci.go:84] error getting container status, will try to delete anyways: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:07.337297    3448 cli_runner.go:164] Run: docker exec --privileged -t auto-20220601112023-9404 /bin/bash -c "sudo init 0"
	W0601 11:29:08.437502    3448 cli_runner.go:211] docker exec --privileged -t auto-20220601112023-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:29:08.437502    3448 cli_runner.go:217] Completed: docker exec --privileged -t auto-20220601112023-9404 /bin/bash -c "sudo init 0": (1.1001927s)
	I0601 11:29:08.437502    3448 oci.go:625] error shutdown auto-20220601112023-9404: docker exec --privileged -t auto-20220601112023-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:09.445415    3448 cli_runner.go:164] Run: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}
	W0601 11:29:10.550534    3448 cli_runner.go:211] docker container inspect auto-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:10.550597    3448 cli_runner.go:217] Completed: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: (1.1050561s)
	I0601 11:29:10.550597    3448 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:10.550597    3448 oci.go:639] temporary error: container auto-20220601112023-9404 status is  but expect it to be exited
	I0601 11:29:10.550597    3448 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:11.021554    3448 cli_runner.go:164] Run: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}
	W0601 11:29:12.177249    3448 cli_runner.go:211] docker container inspect auto-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:12.177249    3448 cli_runner.go:217] Completed: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: (1.1555416s)
	I0601 11:29:12.177402    3448 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:12.177473    3448 oci.go:639] temporary error: container auto-20220601112023-9404 status is  but expect it to be exited
	I0601 11:29:12.177473    3448 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:13.083071    3448 cli_runner.go:164] Run: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}
	W0601 11:29:14.161228    3448 cli_runner.go:211] docker container inspect auto-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:14.161228    3448 cli_runner.go:217] Completed: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: (1.0781446s)
	I0601 11:29:14.161228    3448 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:14.161228    3448 oci.go:639] temporary error: container auto-20220601112023-9404 status is  but expect it to be exited
	I0601 11:29:14.161228    3448 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:14.806045    3448 cli_runner.go:164] Run: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}
	W0601 11:29:15.902559    3448 cli_runner.go:211] docker container inspect auto-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:15.902559    3448 cli_runner.go:217] Completed: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: (1.0964613s)
	I0601 11:29:15.902559    3448 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:15.902559    3448 oci.go:639] temporary error: container auto-20220601112023-9404 status is  but expect it to be exited
	I0601 11:29:15.902559    3448 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:17.031349    3448 cli_runner.go:164] Run: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}
	W0601 11:29:18.168118    3448 cli_runner.go:211] docker container inspect auto-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:18.168187    3448 cli_runner.go:217] Completed: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: (1.1366106s)
	I0601 11:29:18.168187    3448 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:18.168187    3448 oci.go:639] temporary error: container auto-20220601112023-9404 status is  but expect it to be exited
	I0601 11:29:18.168187    3448 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:19.701305    3448 cli_runner.go:164] Run: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}
	W0601 11:29:20.815468    3448 cli_runner.go:211] docker container inspect auto-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:20.815468    3448 cli_runner.go:217] Completed: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: (1.1141508s)
	I0601 11:29:20.815468    3448 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:20.815468    3448 oci.go:639] temporary error: container auto-20220601112023-9404 status is  but expect it to be exited
	I0601 11:29:20.815468    3448 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:23.877877    3448 cli_runner.go:164] Run: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}
	W0601 11:29:24.975097    3448 cli_runner.go:211] docker container inspect auto-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:24.975097    3448 cli_runner.go:217] Completed: docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: (1.0972079s)
	I0601 11:29:24.975097    3448 oci.go:637] temporary error verifying shutdown: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:24.975097    3448 oci.go:639] temporary error: container auto-20220601112023-9404 status is  but expect it to be exited
	I0601 11:29:24.975097    3448 oci.go:88] couldn't shut down auto-20220601112023-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "auto-20220601112023-9404": docker container inspect auto-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	 
	I0601 11:29:24.982330    3448 cli_runner.go:164] Run: docker rm -f -v auto-20220601112023-9404
	I0601 11:29:26.096001    3448 cli_runner.go:217] Completed: docker rm -f -v auto-20220601112023-9404: (1.113632s)
	I0601 11:29:26.102403    3448 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220601112023-9404
	W0601 11:29:27.198860    3448 cli_runner.go:211] docker container inspect -f {{.Id}} auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:27.198860    3448 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} auto-20220601112023-9404: (1.0963969s)
	I0601 11:29:27.207044    3448 cli_runner.go:164] Run: docker network inspect auto-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:29:28.322601    3448 cli_runner.go:211] docker network inspect auto-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:29:28.322786    3448 cli_runner.go:217] Completed: docker network inspect auto-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1155448s)
	I0601 11:29:28.329119    3448 network_create.go:272] running [docker network inspect auto-20220601112023-9404] to gather additional debugging logs...
	I0601 11:29:28.329119    3448 cli_runner.go:164] Run: docker network inspect auto-20220601112023-9404
	W0601 11:29:29.406417    3448 cli_runner.go:211] docker network inspect auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:29.406417    3448 cli_runner.go:217] Completed: docker network inspect auto-20220601112023-9404: (1.0772863s)
	I0601 11:29:29.406417    3448 network_create.go:275] error running [docker network inspect auto-20220601112023-9404]: docker network inspect auto-20220601112023-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220601112023-9404
	I0601 11:29:29.406417    3448 network_create.go:277] output of [docker network inspect auto-20220601112023-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220601112023-9404
	
	** /stderr **
	W0601 11:29:29.407750    3448 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:29:29.407750    3448 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:29:30.416950    3448 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:29:30.423713    3448 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:29:30.423713    3448 start.go:165] libmachine.API.Create for "auto-20220601112023-9404" (driver="docker")
	I0601 11:29:30.423713    3448 client.go:168] LocalClient.Create starting
	I0601 11:29:30.424752    3448 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:29:30.424891    3448 main.go:134] libmachine: Decoding PEM data...
	I0601 11:29:30.425082    3448 main.go:134] libmachine: Parsing certificate...
	I0601 11:29:30.425082    3448 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:29:30.425082    3448 main.go:134] libmachine: Decoding PEM data...
	I0601 11:29:30.425082    3448 main.go:134] libmachine: Parsing certificate...
	I0601 11:29:30.434913    3448 cli_runner.go:164] Run: docker network inspect auto-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:29:31.515240    3448 cli_runner.go:211] docker network inspect auto-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:29:31.515317    3448 cli_runner.go:217] Completed: docker network inspect auto-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0800892s)
	I0601 11:29:31.524664    3448 network_create.go:272] running [docker network inspect auto-20220601112023-9404] to gather additional debugging logs...
	I0601 11:29:31.524664    3448 cli_runner.go:164] Run: docker network inspect auto-20220601112023-9404
	W0601 11:29:32.561522    3448 cli_runner.go:211] docker network inspect auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:32.561522    3448 cli_runner.go:217] Completed: docker network inspect auto-20220601112023-9404: (1.036847s)
	I0601 11:29:32.561522    3448 network_create.go:275] error running [docker network inspect auto-20220601112023-9404]: docker network inspect auto-20220601112023-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220601112023-9404
	I0601 11:29:32.561522    3448 network_create.go:277] output of [docker network inspect auto-20220601112023-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220601112023-9404
	
	** /stderr **
	I0601 11:29:32.570153    3448 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:29:33.662716    3448 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0925519s)
	I0601 11:29:33.679094    3448 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006390] amended:false}} dirty:map[] misses:0}
	I0601 11:29:33.680130    3448 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:29:33.699559    3448 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006390] amended:true}} dirty:map[192.168.49.0:0xc000006390 192.168.58.0:0xc0000062f8] misses:0}
	I0601 11:29:33.699559    3448 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:29:33.699559    3448 network_create.go:115] attempt to create docker network auto-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:29:33.708378    3448 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601112023-9404
	W0601 11:29:34.750022    3448 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:34.750022    3448 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601112023-9404: (1.0416325s)
	E0601 11:29:34.750022    3448 network_create.go:104] error while trying to create docker network auto-20220601112023-9404 192.168.58.0/24: create docker network auto-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9380c194bf6fd0e34e2aec858cc86a59e00fea6fe782da9a44bbf2fb1859775c (br-9380c194bf6f): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:29:34.750022    3448 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9380c194bf6fd0e34e2aec858cc86a59e00fea6fe782da9a44bbf2fb1859775c (br-9380c194bf6f): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9380c194bf6fd0e34e2aec858cc86a59e00fea6fe782da9a44bbf2fb1859775c (br-9380c194bf6f): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:29:34.763057    3448 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:29:35.851814    3448 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0887452s)
	I0601 11:29:35.859778    3448 cli_runner.go:164] Run: docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:29:36.975301    3448 cli_runner.go:211] docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:29:36.975301    3448 cli_runner.go:217] Completed: docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1155114s)
	I0601 11:29:36.975301    3448 client.go:171] LocalClient.Create took 6.551519s
	I0601 11:29:38.995692    3448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:29:39.002555    3448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404
	W0601 11:29:40.028410    3448 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:40.028410    3448 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: (1.025598s)
	I0601 11:29:40.028410    3448 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:40.369746    3448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404
	W0601 11:29:41.454077    3448 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:41.454077    3448 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: (1.0840425s)
	W0601 11:29:41.454077    3448 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	
	W0601 11:29:41.454077    3448 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:41.467642    3448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:29:41.473632    3448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404
	W0601 11:29:42.558006    3448 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:42.558006    3448 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: (1.0843618s)
	I0601 11:29:42.558006    3448 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:42.799013    3448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404
	W0601 11:29:43.902157    3448 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:43.902157    3448 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: (1.1031314s)
	W0601 11:29:43.902157    3448 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	
	W0601 11:29:43.902157    3448 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:43.902157    3448 start.go:134] duration metric: createHost completed in 13.4848478s
	I0601 11:29:43.911689    3448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:29:43.918005    3448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404
	W0601 11:29:45.019283    3448 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:45.019527    3448 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: (1.1012661s)
	I0601 11:29:45.019711    3448 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:45.282774    3448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404
	W0601 11:29:46.364655    3448 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:46.364655    3448 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: (1.0817674s)
	W0601 11:29:46.364655    3448 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	
	W0601 11:29:46.364655    3448 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:46.375744    3448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:29:46.381662    3448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404
	W0601 11:29:47.450517    3448 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:47.450517    3448 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: (1.0688435s)
	I0601 11:29:47.450517    3448 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:47.668399    3448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404
	W0601 11:29:48.762615    3448 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404 returned with exit code 1
	I0601 11:29:48.762615    3448 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: (1.0932958s)
	W0601 11:29:48.762615    3448 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	
	W0601 11:29:48.762615    3448 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220601112023-9404
	I0601 11:29:48.762615    3448 fix.go:57] fixHost completed within 46.8809872s
	I0601 11:29:48.762615    3448 start.go:81] releasing machines lock for "auto-20220601112023-9404", held for 46.8815083s
	W0601 11:29:48.763582    3448 out.go:239] * Failed to start docker container. Running "minikube delete -p auto-20220601112023-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220601112023-9404 container: docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/auto-20220601112023-9404': mkdir /var/lib/docker/volumes/auto-20220601112023-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p auto-20220601112023-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220601112023-9404 container: docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/auto-20220601112023-9404': mkdir /var/lib/docker/volumes/auto-20220601112023-9404: read-only file system
	
	I0601 11:29:48.769171    3448 out.go:177] 
	W0601 11:29:48.771476    3448 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220601112023-9404 container: docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/auto-20220601112023-9404': mkdir /var/lib/docker/volumes/auto-20220601112023-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220601112023-9404 container: docker volume create auto-20220601112023-9404 --label name.minikube.sigs.k8s.io=auto-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/auto-20220601112023-9404': mkdir /var/lib/docker/volumes/auto-20220601112023-9404: read-only file system
	
	W0601 11:29:48.771874    3448 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:29:48.772017    3448 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:29:48.795499    3448 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/auto/Start (77.60s)
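The failure above records two Docker-side problems: `docker network create` was rejected because the 192.168.58.0/24 subnet minikube reserved conflicts with an existing bridge network ("networks have overlapping IPv4"), and the subsequent `docker volume create` failed because /var/lib/docker had become a read-only file system. The overlap check the daemon performs can be reproduced with Python's stdlib `ipaddress` module — a minimal sketch for illustration, not minikube's or Docker's actual implementation:

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True if two IPv4 CIDR blocks share any addresses,
    the condition behind Docker's 'networks have overlapping IPv4' error."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The log shows minikube reserving 192.168.58.0/24; if a stale bridge
# network already occupies that range, network creation fails:
print(subnets_overlap("192.168.58.0/24", "192.168.58.0/24"))   # True
print(subnets_overlap("192.168.58.0/24", "192.168.49.0/24"))   # False
print(subnets_overlap("192.168.58.0/24", "192.168.58.128/25")) # True
```

Listing leftover bridge networks with `docker network ls` and removing stale ones (or restarting Docker, as the log's suggestion says) clears the conflicting reservation.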

TestNetworkPlugins/group/kindnet/Start (77.74s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20220601112030-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-20220601112030-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: exit status 60 (1m17.6447821s)

-- stdout --
	* [kindnet-20220601112030-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kindnet-20220601112030-9404 in cluster kindnet-20220601112030-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kindnet-20220601112030-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:28:47.052684    4872 out.go:296] Setting OutFile to fd 2040 ...
	I0601 11:28:47.112565    4872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:28:47.112565    4872 out.go:309] Setting ErrFile to fd 1560...
	I0601 11:28:47.112565    4872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:28:47.123565    4872 out.go:303] Setting JSON to false
	I0601 11:28:47.128196    4872 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14862,"bootTime":1654068065,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:28:47.129196    4872 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:28:47.134280    4872 out.go:177] * [kindnet-20220601112030-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:28:47.138408    4872 notify.go:193] Checking for updates...
	I0601 11:28:47.140754    4872 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:28:47.143282    4872 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:28:47.145668    4872 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:28:47.147688    4872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:28:47.150755    4872 config.go:178] Loaded profile config "auto-20220601112023-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:28:47.150755    4872 config.go:178] Loaded profile config "default-k8s-different-port-20220601112749-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:28:47.150755    4872 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:28:47.152047    4872 config.go:178] Loaded profile config "newest-cni-20220601112753-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:28:47.152047    4872 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:28:49.893672    4872 docker.go:137] docker version: linux-20.10.14
	I0601 11:28:49.903499    4872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:28:52.006893    4872 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.103241s)
	I0601 11:28:52.007455    4872 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:28:50.9309731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:28:52.010915    4872 out.go:177] * Using the docker driver based on user configuration
	I0601 11:28:52.013344    4872 start.go:284] selected driver: docker
	I0601 11:28:52.013344    4872 start.go:806] validating driver "docker" against <nil>
	I0601 11:28:52.013344    4872 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:28:52.082835    4872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:28:54.217078    4872 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1341556s)
	I0601 11:28:54.217078    4872 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:28:53.1495308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:28:54.217078    4872 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:28:54.217790    4872 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:28:54.220612    4872 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:28:54.222600    4872 cni.go:95] Creating CNI manager for "kindnet"
	I0601 11:28:54.222600    4872 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:28:54.222600    4872 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0601 11:28:54.222600    4872 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0601 11:28:54.222600    4872 start_flags.go:306] config:
	{Name:kindnet-20220601112030-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220601112030-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:28:54.226816    4872 out.go:177] * Starting control plane node kindnet-20220601112030-9404 in cluster kindnet-20220601112030-9404
	I0601 11:28:54.228735    4872 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:28:54.231092    4872 out.go:177] * Pulling base image ...
	I0601 11:28:54.234116    4872 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:28:54.234116    4872 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:28:54.234281    4872 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:28:54.234335    4872 cache.go:57] Caching tarball of preloaded images
	I0601 11:28:54.234812    4872 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:28:54.234882    4872 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:28:54.234882    4872 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-20220601112030-9404\config.json ...
	I0601 11:28:54.234882    4872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-20220601112030-9404\config.json: {Name:mka052517905dddbc9f9e6908e490e82c4433d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:28:55.338294    4872 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:28:55.338294    4872 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:28:55.338294    4872 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:28:55.338294    4872 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:28:55.338294    4872 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:28:55.338294    4872 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:28:55.338294    4872 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:28:55.338294    4872 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:28:55.338294    4872 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:28:57.717893    4872 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:28:57.718427    4872 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:28:57.718566    4872 start.go:352] acquiring machines lock for kindnet-20220601112030-9404: {Name:mk040205c4f76e02e76b63ea1fe239edc03f234c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:28:57.718778    4872 start.go:356] acquired machines lock for "kindnet-20220601112030-9404" in 125.1µs
	I0601 11:28:57.718778    4872 start.go:91] Provisioning new machine with config: &{Name:kindnet-20220601112030-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220601112030-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:28:57.718778    4872 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:28:57.722717    4872 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:28:57.722717    4872 start.go:165] libmachine.API.Create for "kindnet-20220601112030-9404" (driver="docker")
	I0601 11:28:57.723280    4872 client.go:168] LocalClient.Create starting
	I0601 11:28:57.723875    4872 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:28:57.724128    4872 main.go:134] libmachine: Decoding PEM data...
	I0601 11:28:57.724213    4872 main.go:134] libmachine: Parsing certificate...
	I0601 11:28:57.724380    4872 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:28:57.724380    4872 main.go:134] libmachine: Decoding PEM data...
	I0601 11:28:57.724380    4872 main.go:134] libmachine: Parsing certificate...
	I0601 11:28:57.732890    4872 cli_runner.go:164] Run: docker network inspect kindnet-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:28:58.819328    4872 cli_runner.go:211] docker network inspect kindnet-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:28:58.819494    4872 cli_runner.go:217] Completed: docker network inspect kindnet-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0863592s)
	I0601 11:28:58.834504    4872 network_create.go:272] running [docker network inspect kindnet-20220601112030-9404] to gather additional debugging logs...
	I0601 11:28:58.834504    4872 cli_runner.go:164] Run: docker network inspect kindnet-20220601112030-9404
	W0601 11:28:59.890144    4872 cli_runner.go:211] docker network inspect kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:28:59.890144    4872 cli_runner.go:217] Completed: docker network inspect kindnet-20220601112030-9404: (1.0556282s)
	I0601 11:28:59.890144    4872 network_create.go:275] error running [docker network inspect kindnet-20220601112030-9404]: docker network inspect kindnet-20220601112030-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220601112030-9404
	I0601 11:28:59.890144    4872 network_create.go:277] output of [docker network inspect kindnet-20220601112030-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220601112030-9404
	
	** /stderr **
	I0601 11:28:59.898611    4872 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:29:00.953889    4872 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0552659s)
	I0601 11:29:00.973741    4872 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006728] misses:0}
	I0601 11:29:00.973741    4872 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:29:00.974750    4872 network_create.go:115] attempt to create docker network kindnet-20220601112030-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:29:00.980449    4872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601112030-9404
	W0601 11:29:02.042177    4872 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:02.042177    4872 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601112030-9404: (1.0615837s)
	E0601 11:29:02.042177    4872 network_create.go:104] error while trying to create docker network kindnet-20220601112030-9404 192.168.49.0/24: create docker network kindnet-20220601112030-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601112030-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5477cd3960b0647c7a55b5d69d477d77372244b8c3024a081e8c3cfed677ae54 (br-5477cd3960b0): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:29:02.042177    4872 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220601112030-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601112030-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5477cd3960b0647c7a55b5d69d477d77372244b8c3024a081e8c3cfed677ae54 (br-5477cd3960b0): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220601112030-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601112030-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5477cd3960b0647c7a55b5d69d477d77372244b8c3024a081e8c3cfed677ae54 (br-5477cd3960b0): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:29:02.055293    4872 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:29:03.140848    4872 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0845425s)
	I0601 11:29:03.149584    4872 cli_runner.go:164] Run: docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:29:04.275330    4872 cli_runner.go:211] docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:29:04.275330    4872 cli_runner.go:217] Completed: docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: (1.125202s)
	I0601 11:29:04.275330    4872 client.go:171] LocalClient.Create took 6.5519444s
	I0601 11:29:06.302416    4872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:29:06.308430    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404
	W0601 11:29:07.391300    4872 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:07.391300    4872 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: (1.0828578s)
	I0601 11:29:07.391300    4872 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:07.684738    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404
	W0601 11:29:08.784535    4872 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:08.784592    4872 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: (1.0995879s)
	W0601 11:29:08.784592    4872 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	
	W0601 11:29:08.784592    4872 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:08.794882    4872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:29:08.801509    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404
	W0601 11:29:09.910484    4872 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:09.910484    4872 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: (1.1087131s)
	I0601 11:29:09.910484    4872 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:10.219347    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404
	W0601 11:29:11.328434    4872 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:11.328434    4872 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: (1.1085508s)
	W0601 11:29:11.328434    4872 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	
	W0601 11:29:11.328434    4872 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:11.328434    4872 start.go:134] duration metric: createHost completed in 13.6095017s
	I0601 11:29:11.328434    4872 start.go:81] releasing machines lock for "kindnet-20220601112030-9404", held for 13.6095017s
	W0601 11:29:11.328434    4872 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for kindnet-20220601112030-9404 container: docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220601112030-9404': mkdir /var/lib/docker/volumes/kindnet-20220601112030-9404: read-only file system
	I0601 11:29:11.343347    4872 cli_runner.go:164] Run: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}
	W0601 11:29:12.442107    4872 cli_runner.go:211] docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:12.442107    4872 cli_runner.go:217] Completed: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: (1.0986171s)
	I0601 11:29:12.442107    4872 delete.go:82] Unable to get host status for kindnet-20220601112030-9404, assuming it has already been deleted: state: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	W0601 11:29:12.442107    4872 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kindnet-20220601112030-9404 container: docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220601112030-9404': mkdir /var/lib/docker/volumes/kindnet-20220601112030-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kindnet-20220601112030-9404 container: docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220601112030-9404': mkdir /var/lib/docker/volumes/kindnet-20220601112030-9404: read-only file system
	
	I0601 11:29:12.442623    4872 start.go:614] Will try again in 5 seconds ...
	I0601 11:29:17.449263    4872 start.go:352] acquiring machines lock for kindnet-20220601112030-9404: {Name:mk040205c4f76e02e76b63ea1fe239edc03f234c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:29:17.449263    4872 start.go:356] acquired machines lock for "kindnet-20220601112030-9404" in 0s
	I0601 11:29:17.449263    4872 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:29:17.449263    4872 fix.go:55] fixHost starting: 
	I0601 11:29:17.463714    4872 cli_runner.go:164] Run: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}
	W0601 11:29:18.594204    4872 cli_runner.go:211] docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:18.594204    4872 cli_runner.go:217] Completed: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: (1.1304775s)
	I0601 11:29:18.594204    4872 fix.go:103] recreateIfNeeded on kindnet-20220601112030-9404: state= err=unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:18.594204    4872 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:29:18.599192    4872 out.go:177] * docker "kindnet-20220601112030-9404" container is missing, will recreate.
	I0601 11:29:18.601192    4872 delete.go:124] DEMOLISHING kindnet-20220601112030-9404 ...
	I0601 11:29:18.614196    4872 cli_runner.go:164] Run: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}
	W0601 11:29:19.676991    4872 cli_runner.go:211] docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:19.676991    4872 cli_runner.go:217] Completed: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: (1.0627828s)
	W0601 11:29:19.676991    4872 stop.go:75] unable to get state: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:19.676991    4872 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:19.691103    4872 cli_runner.go:164] Run: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}
	W0601 11:29:20.799032    4872 cli_runner.go:211] docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:20.799032    4872 cli_runner.go:217] Completed: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: (1.1079167s)
	I0601 11:29:20.799032    4872 delete.go:82] Unable to get host status for kindnet-20220601112030-9404, assuming it has already been deleted: state: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:20.805036    4872 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kindnet-20220601112030-9404
	W0601 11:29:21.895149    4872 cli_runner.go:211] docker container inspect -f {{.Id}} kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:21.895149    4872 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kindnet-20220601112030-9404: (1.0901004s)
	I0601 11:29:21.895149    4872 kic.go:356] could not find the container kindnet-20220601112030-9404 to remove it. will try anyways
	I0601 11:29:21.902101    4872 cli_runner.go:164] Run: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}
	W0601 11:29:22.973986    4872 cli_runner.go:211] docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:22.973986    4872 cli_runner.go:217] Completed: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: (1.0714977s)
	W0601 11:29:22.973986    4872 oci.go:84] error getting container status, will try to delete anyways: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:22.981321    4872 cli_runner.go:164] Run: docker exec --privileged -t kindnet-20220601112030-9404 /bin/bash -c "sudo init 0"
	W0601 11:29:24.121973    4872 cli_runner.go:211] docker exec --privileged -t kindnet-20220601112030-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:29:24.121973    4872 cli_runner.go:217] Completed: docker exec --privileged -t kindnet-20220601112030-9404 /bin/bash -c "sudo init 0": (1.1406405s)
	I0601 11:29:24.121973    4872 oci.go:625] error shutdown kindnet-20220601112030-9404: docker exec --privileged -t kindnet-20220601112030-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:25.136336    4872 cli_runner.go:164] Run: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}
	W0601 11:29:26.237613    4872 cli_runner.go:211] docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:26.237613    4872 cli_runner.go:217] Completed: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: (1.1012651s)
	I0601 11:29:26.237613    4872 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:26.237613    4872 oci.go:639] temporary error: container kindnet-20220601112030-9404 status is  but expect it to be exited
	I0601 11:29:26.237613    4872 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:26.719102    4872 cli_runner.go:164] Run: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}
	W0601 11:29:27.810036    4872 cli_runner.go:211] docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:27.810036    4872 cli_runner.go:217] Completed: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: (1.0907765s)
	I0601 11:29:27.810036    4872 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:27.810036    4872 oci.go:639] temporary error: container kindnet-20220601112030-9404 status is  but expect it to be exited
	I0601 11:29:27.810036    4872 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:28.710308    4872 cli_runner.go:164] Run: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}
	W0601 11:29:29.814955    4872 cli_runner.go:211] docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:29.814955    4872 cli_runner.go:217] Completed: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: (1.1046356s)
	I0601 11:29:29.814955    4872 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:29.814955    4872 oci.go:639] temporary error: container kindnet-20220601112030-9404 status is  but expect it to be exited
	I0601 11:29:29.814955    4872 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:30.469644    4872 cli_runner.go:164] Run: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}
	W0601 11:29:31.530334    4872 cli_runner.go:211] docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:31.530334    4872 cli_runner.go:217] Completed: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: (1.0606786s)
	I0601 11:29:31.530334    4872 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:31.530334    4872 oci.go:639] temporary error: container kindnet-20220601112030-9404 status is  but expect it to be exited
	I0601 11:29:31.530334    4872 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:32.650525    4872 cli_runner.go:164] Run: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}
	W0601 11:29:33.756371    4872 cli_runner.go:211] docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:33.756371    4872 cli_runner.go:217] Completed: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: (1.1058338s)
	I0601 11:29:33.756371    4872 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:33.756371    4872 oci.go:639] temporary error: container kindnet-20220601112030-9404 status is  but expect it to be exited
	I0601 11:29:33.756371    4872 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:35.281774    4872 cli_runner.go:164] Run: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}
	W0601 11:29:36.370702    4872 cli_runner.go:211] docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:36.370702    4872 cli_runner.go:217] Completed: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: (1.0889168s)
	I0601 11:29:36.370702    4872 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:36.370702    4872 oci.go:639] temporary error: container kindnet-20220601112030-9404 status is  but expect it to be exited
	I0601 11:29:36.370702    4872 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:39.419828    4872 cli_runner.go:164] Run: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}
	W0601 11:29:40.468178    4872 cli_runner.go:211] docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:40.468178    4872 cli_runner.go:217] Completed: docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: (1.0483375s)
	I0601 11:29:40.468178    4872 oci.go:637] temporary error verifying shutdown: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:40.468178    4872 oci.go:639] temporary error: container kindnet-20220601112030-9404 status is  but expect it to be exited
	I0601 11:29:40.468178    4872 oci.go:88] couldn't shut down kindnet-20220601112030-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kindnet-20220601112030-9404": docker container inspect kindnet-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	 
	I0601 11:29:40.474780    4872 cli_runner.go:164] Run: docker rm -f -v kindnet-20220601112030-9404
	I0601 11:29:41.517622    4872 cli_runner.go:217] Completed: docker rm -f -v kindnet-20220601112030-9404: (1.0428298s)
	I0601 11:29:41.524587    4872 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kindnet-20220601112030-9404
	W0601 11:29:42.604780    4872 cli_runner.go:211] docker container inspect -f {{.Id}} kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:42.604780    4872 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kindnet-20220601112030-9404: (1.0801811s)
	I0601 11:29:42.611700    4872 cli_runner.go:164] Run: docker network inspect kindnet-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:29:43.701005    4872 cli_runner.go:211] docker network inspect kindnet-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:29:43.701005    4872 cli_runner.go:217] Completed: docker network inspect kindnet-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0892934s)
	I0601 11:29:43.709561    4872 network_create.go:272] running [docker network inspect kindnet-20220601112030-9404] to gather additional debugging logs...
	I0601 11:29:43.709732    4872 cli_runner.go:164] Run: docker network inspect kindnet-20220601112030-9404
	W0601 11:29:44.784272    4872 cli_runner.go:211] docker network inspect kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:44.784272    4872 cli_runner.go:217] Completed: docker network inspect kindnet-20220601112030-9404: (1.0745278s)
	I0601 11:29:44.784272    4872 network_create.go:275] error running [docker network inspect kindnet-20220601112030-9404]: docker network inspect kindnet-20220601112030-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220601112030-9404
	I0601 11:29:44.784272    4872 network_create.go:277] output of [docker network inspect kindnet-20220601112030-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220601112030-9404
	
	** /stderr **
	W0601 11:29:44.785273    4872 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:29:44.785273    4872 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:29:45.785474    4872 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:29:45.789414    4872 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:29:45.789726    4872 start.go:165] libmachine.API.Create for "kindnet-20220601112030-9404" (driver="docker")
	I0601 11:29:45.789776    4872 client.go:168] LocalClient.Create starting
	I0601 11:29:45.790305    4872 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:29:45.790354    4872 main.go:134] libmachine: Decoding PEM data...
	I0601 11:29:45.790354    4872 main.go:134] libmachine: Parsing certificate...
	I0601 11:29:45.790354    4872 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:29:45.790877    4872 main.go:134] libmachine: Decoding PEM data...
	I0601 11:29:45.790877    4872 main.go:134] libmachine: Parsing certificate...
	I0601 11:29:45.799503    4872 cli_runner.go:164] Run: docker network inspect kindnet-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:29:46.899784    4872 cli_runner.go:211] docker network inspect kindnet-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:29:46.899784    4872 cli_runner.go:217] Completed: docker network inspect kindnet-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1002683s)
	I0601 11:29:46.907101    4872 network_create.go:272] running [docker network inspect kindnet-20220601112030-9404] to gather additional debugging logs...
	I0601 11:29:46.907101    4872 cli_runner.go:164] Run: docker network inspect kindnet-20220601112030-9404
	W0601 11:29:48.002311    4872 cli_runner.go:211] docker network inspect kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:48.002311    4872 cli_runner.go:217] Completed: docker network inspect kindnet-20220601112030-9404: (1.0951974s)
	I0601 11:29:48.002481    4872 network_create.go:275] error running [docker network inspect kindnet-20220601112030-9404]: docker network inspect kindnet-20220601112030-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220601112030-9404
	I0601 11:29:48.002547    4872 network_create.go:277] output of [docker network inspect kindnet-20220601112030-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220601112030-9404
	
	** /stderr **
	I0601 11:29:48.010608    4872 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:29:49.065156    4872 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0545358s)
	I0601 11:29:49.082158    4872 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006728] amended:false}} dirty:map[] misses:0}
	I0601 11:29:49.082158    4872 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:29:49.099168    4872 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006728] amended:true}} dirty:map[192.168.49.0:0xc000006728 192.168.58.0:0xc000116430] misses:0}
	I0601 11:29:49.099168    4872 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:29:49.099168    4872 network_create.go:115] attempt to create docker network kindnet-20220601112030-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:29:49.107161    4872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601112030-9404
	W0601 11:29:50.202974    4872 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:50.203042    4872 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601112030-9404: (1.0955337s)
	E0601 11:29:50.203042    4872 network_create.go:104] error while trying to create docker network kindnet-20220601112030-9404 192.168.58.0/24: create docker network kindnet-20220601112030-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601112030-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9e65eb8a29c1cd9b8675d53b4a59130f4ad6fb72dadfc9806dc94042e8168bb1 (br-9e65eb8a29c1): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:29:50.203514    4872 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220601112030-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601112030-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9e65eb8a29c1cd9b8675d53b4a59130f4ad6fb72dadfc9806dc94042e8168bb1 (br-9e65eb8a29c1): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220601112030-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220601112030-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9e65eb8a29c1cd9b8675d53b4a59130f4ad6fb72dadfc9806dc94042e8168bb1 (br-9e65eb8a29c1): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:29:50.219529    4872 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:29:51.341168    4872 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1214112s)
	I0601 11:29:51.348751    4872 cli_runner.go:164] Run: docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:29:52.447753    4872 cli_runner.go:211] docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:29:52.447818    4872 cli_runner.go:217] Completed: docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0989406s)
	I0601 11:29:52.447818    4872 client.go:171] LocalClient.Create took 6.6579133s
	I0601 11:29:54.469997    4872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:29:54.476145    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404
	W0601 11:29:55.591823    4872 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:55.591881    4872 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: (1.1155156s)
	I0601 11:29:55.591881    4872 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:55.934543    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404
	W0601 11:29:57.025266    4872 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:57.025292    4872 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: (1.0906403s)
	W0601 11:29:57.025292    4872 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	
	W0601 11:29:57.025292    4872 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:57.036318    4872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:29:57.046187    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404
	W0601 11:29:58.199593    4872 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:58.199593    4872 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: (1.1532065s)
	I0601 11:29:58.199593    4872 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:58.441935    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404
	W0601 11:29:59.530547    4872 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:29:59.530547    4872 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: (1.0885997s)
	W0601 11:29:59.530547    4872 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	
	W0601 11:29:59.530547    4872 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:29:59.530547    4872 start.go:134] duration metric: createHost completed in 13.7449191s
	I0601 11:29:59.541279    4872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:29:59.548173    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404
	W0601 11:30:00.655436    4872 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:30:00.655436    4872 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: (1.1072504s)
	I0601 11:30:00.655436    4872 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:30:00.914986    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404
	W0601 11:30:02.006405    4872 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:30:02.006405    4872 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: (1.0914061s)
	W0601 11:30:02.006405    4872 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	
	W0601 11:30:02.006405    4872 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:30:02.015352    4872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:30:02.022388    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404
	W0601 11:30:03.107480    4872 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:30:03.107690    4872 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: (1.0849606s)
	I0601 11:30:03.107690    4872 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:30:03.321817    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404
	W0601 11:30:04.402603    4872 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404 returned with exit code 1
	I0601 11:30:04.402655    4872 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: (1.0805071s)
	W0601 11:30:04.402655    4872 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	
	W0601 11:30:04.402655    4872 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220601112030-9404
	I0601 11:30:04.402655    4872 fix.go:57] fixHost completed within 46.952875s
	I0601 11:30:04.402655    4872 start.go:81] releasing machines lock for "kindnet-20220601112030-9404", held for 46.952875s
	W0601 11:30:04.403357    4872 out.go:239] * Failed to start docker container. Running "minikube delete -p kindnet-20220601112030-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220601112030-9404 container: docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220601112030-9404': mkdir /var/lib/docker/volumes/kindnet-20220601112030-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p kindnet-20220601112030-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220601112030-9404 container: docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220601112030-9404': mkdir /var/lib/docker/volumes/kindnet-20220601112030-9404: read-only file system
	
	I0601 11:30:04.414019    4872 out.go:177] 
	W0601 11:30:04.416449    4872 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220601112030-9404 container: docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220601112030-9404': mkdir /var/lib/docker/volumes/kindnet-20220601112030-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220601112030-9404 container: docker volume create kindnet-20220601112030-9404 --label name.minikube.sigs.k8s.io=kindnet-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220601112030-9404': mkdir /var/lib/docker/volumes/kindnet-20220601112030-9404: read-only file system
	
	W0601 11:30:04.417094    4872 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:30:04.417094    4872 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:30:04.421665    4872 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/kindnet/Start (77.74s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.58s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220601112749-9404 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601112749-9404 create -f testdata\busybox.yaml: exit status 1 (251.5483ms)

** stderr ** 
	error: context "default-k8s-different-port-20220601112749-9404" does not exist

** /stderr **
start_stop_delete_test.go:198: kubectl --context default-k8s-different-port-20220601112749-9404 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220601112749-9404: exit status 1 (1.2011767s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (2.9501978s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:29:16.321612    7036 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220601112749-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601112749-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220601112749-9404: exit status 1 (1.1352054s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (3.0211891s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:29:20.470079    7680 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220601112749-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.58s)

TestStartStop/group/newest-cni/serial/Stop (27.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20220601112753-9404 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p newest-cni-20220601112753-9404 --alsologtostderr -v=3: exit status 82 (22.8790896s)

-- stdout --
	* Stopping node "newest-cni-20220601112753-9404"  ...
	* Stopping node "newest-cni-20220601112753-9404"  ...
	* Stopping node "newest-cni-20220601112753-9404"  ...
	* Stopping node "newest-cni-20220601112753-9404"  ...
	* Stopping node "newest-cni-20220601112753-9404"  ...
	* Stopping node "newest-cni-20220601112753-9404"  ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:29:19.173683    6004 out.go:296] Setting OutFile to fd 1920 ...
	I0601 11:29:19.232542    6004 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:29:19.232542    6004 out.go:309] Setting ErrFile to fd 664...
	I0601 11:29:19.232542    6004 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:29:19.250786    6004 out.go:303] Setting JSON to false
	I0601 11:29:19.251609    6004 daemonize_windows.go:44] trying to kill existing schedule stop for profile newest-cni-20220601112753-9404...
	I0601 11:29:19.263700    6004 ssh_runner.go:195] Run: systemctl --version
	I0601 11:29:19.270486    6004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:29:21.879136    6004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:29:21.879136    6004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (2.608621s)
	I0601 11:29:21.889143    6004 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0601 11:29:21.896124    6004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:29:22.958047    6004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:29:22.958076    6004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0617261s)
	I0601 11:29:22.958076    6004 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:23.334324    6004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:29:24.436805    6004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:29:24.436805    6004 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.1023941s)
	I0601 11:29:24.436805    6004 openrc.go:165] stop output: 
	E0601 11:29:24.436805    6004 daemonize_windows.go:38] error terminating scheduled stop for profile newest-cni-20220601112753-9404: stopping schedule-stop service for profile newest-cni-20220601112753-9404: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:24.436805    6004 mustload.go:65] Loading cluster: newest-cni-20220601112753-9404
	I0601 11:29:24.437796    6004 config.go:178] Loaded profile config "newest-cni-20220601112753-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:29:24.437796    6004 stop.go:39] StopHost: newest-cni-20220601112753-9404
	I0601 11:29:24.442792    6004 out.go:177] * Stopping node "newest-cni-20220601112753-9404"  ...
	I0601 11:29:24.456853    6004 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:29:25.570284    6004 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:25.570284    6004 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1134194s)
	W0601 11:29:25.570284    6004 stop.go:75] unable to get state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	W0601 11:29:25.570284    6004 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:25.570284    6004 retry.go:31] will retry after 937.714187ms: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:26.523500    6004 stop.go:39] StopHost: newest-cni-20220601112753-9404
	I0601 11:29:26.534237    6004 out.go:177] * Stopping node "newest-cni-20220601112753-9404"  ...
	I0601 11:29:26.548080    6004 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:29:27.638235    6004 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:27.638235    6004 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0899443s)
	W0601 11:29:27.638235    6004 stop.go:75] unable to get state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	W0601 11:29:27.638235    6004 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:27.638235    6004 retry.go:31] will retry after 1.386956246s: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:29.032451    6004 stop.go:39] StopHost: newest-cni-20220601112753-9404
	I0601 11:29:29.039255    6004 out.go:177] * Stopping node "newest-cni-20220601112753-9404"  ...
	I0601 11:29:29.055299    6004 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:29:30.179931    6004 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:30.179931    6004 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1246198s)
	W0601 11:29:30.179931    6004 stop.go:75] unable to get state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	W0601 11:29:30.179931    6004 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:30.179931    6004 retry.go:31] will retry after 2.670351914s: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:32.861660    6004 stop.go:39] StopHost: newest-cni-20220601112753-9404
	I0601 11:29:32.867448    6004 out.go:177] * Stopping node "newest-cni-20220601112753-9404"  ...
	I0601 11:29:32.883376    6004 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:29:34.008268    6004 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:34.008268    6004 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1248806s)
	W0601 11:29:34.008268    6004 stop.go:75] unable to get state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	W0601 11:29:34.008268    6004 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:34.008268    6004 retry.go:31] will retry after 1.909024939s: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:35.930152    6004 stop.go:39] StopHost: newest-cni-20220601112753-9404
	I0601 11:29:35.934839    6004 out.go:177] * Stopping node "newest-cni-20220601112753-9404"  ...
	I0601 11:29:35.958455    6004 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:29:37.067447    6004 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:37.067514    6004 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1085963s)
	W0601 11:29:37.067584    6004 stop.go:75] unable to get state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	W0601 11:29:37.067615    6004 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:37.067615    6004 retry.go:31] will retry after 3.323628727s: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:40.407093    6004 stop.go:39] StopHost: newest-cni-20220601112753-9404
	I0601 11:29:40.413554    6004 out.go:177] * Stopping node "newest-cni-20220601112753-9404"  ...
	I0601 11:29:40.432079    6004 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:29:41.469635    6004 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:41.469635    6004 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0375439s)
	W0601 11:29:41.469635    6004 stop.go:75] unable to get state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	W0601 11:29:41.469635    6004 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:29:41.473632    6004 out.go:177] 
	W0601 11:29:41.475633    6004 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-20220601112753-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-20220601112753-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:29:41.475633    6004 out.go:239] * 
	* 
	W0601 11:29:41.767989    6004 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:29:41.770991    6004 out.go:177] 

** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p newest-cni-20220601112753-9404 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220601112753-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220601112753-9404: exit status 1 (1.1952406s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220601112753-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404: exit status 7 (2.9336653s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:29:45.927199    5556 status.go:247] status error: host: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220601112753-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (27.02s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (7.32s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220601112749-9404 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220601112749-9404 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.9469766s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220601112749-9404 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601112749-9404 describe deploy/metrics-server -n kube-system: exit status 1 (231.3432ms)

** stderr ** 
	error: context "default-k8s-different-port-20220601112749-9404" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-different-port-20220601112749-9404 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601112749-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220601112749-9404: exit status 1 (1.1570554s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (2.9766064s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:29:27.794036    8164 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220601112749-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (7.32s)

TestStartStop/group/default-k8s-different-port/serial/Stop (26.95s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220601112749-9404 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220601112749-9404 --alsologtostderr -v=3: exit status 82 (22.7107943s)

-- stdout --
	* Stopping node "default-k8s-different-port-20220601112749-9404"  ...
	* Stopping node "default-k8s-different-port-20220601112749-9404"  ...
	* Stopping node "default-k8s-different-port-20220601112749-9404"  ...
	* Stopping node "default-k8s-different-port-20220601112749-9404"  ...
	* Stopping node "default-k8s-different-port-20220601112749-9404"  ...
	* Stopping node "default-k8s-different-port-20220601112749-9404"  ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:29:28.058013    9796 out.go:296] Setting OutFile to fd 1776 ...
	I0601 11:29:28.125302    9796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:29:28.125302    9796 out.go:309] Setting ErrFile to fd 1780...
	I0601 11:29:28.125302    9796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:29:28.135924    9796 out.go:303] Setting JSON to false
	I0601 11:29:28.136940    9796 daemonize_windows.go:44] trying to kill existing schedule stop for profile default-k8s-different-port-20220601112749-9404...
	I0601 11:29:28.154151    9796 ssh_runner.go:195] Run: systemctl --version
	I0601 11:29:28.162655    9796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:29:30.773492    9796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:29:30.773492    9796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (2.6105007s)
	I0601 11:29:30.787669    9796 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0601 11:29:30.794663    9796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:29:31.879975    9796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:29:31.880006    9796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0852484s)
	I0601 11:29:31.880371    9796 retry.go:31] will retry after 360.127272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:32.253287    9796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:29:33.351278    9796 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:29:33.351278    9796 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0970208s)
	I0601 11:29:33.351278    9796 openrc.go:165] stop output: 
	E0601 11:29:33.351278    9796 daemonize_windows.go:38] error terminating scheduled stop for profile default-k8s-different-port-20220601112749-9404: stopping schedule-stop service for profile default-k8s-different-port-20220601112749-9404: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:33.351278    9796 mustload.go:65] Loading cluster: default-k8s-different-port-20220601112749-9404
	I0601 11:29:33.352826    9796 config.go:178] Loaded profile config "default-k8s-different-port-20220601112749-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:29:33.352958    9796 stop.go:39] StopHost: default-k8s-different-port-20220601112749-9404
	I0601 11:29:33.357815    9796 out.go:177] * Stopping node "default-k8s-different-port-20220601112749-9404"  ...
	I0601 11:29:33.371712    9796 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:29:34.451259    9796 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:34.451471    9796 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0793759s)
	W0601 11:29:34.451471    9796 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	W0601 11:29:34.452336    9796 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:34.452361    9796 retry.go:31] will retry after 937.714187ms: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:35.399148    9796 stop.go:39] StopHost: default-k8s-different-port-20220601112749-9404
	I0601 11:29:35.403871    9796 out.go:177] * Stopping node "default-k8s-different-port-20220601112749-9404"  ...
	I0601 11:29:35.417650    9796 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:29:36.541841    9796 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:36.541841    9796 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.124024s)
	W0601 11:29:36.541841    9796 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	W0601 11:29:36.541841    9796 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:36.541841    9796 retry.go:31] will retry after 1.386956246s: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:37.939721    9796 stop.go:39] StopHost: default-k8s-different-port-20220601112749-9404
	I0601 11:29:37.939869    9796 out.go:177] * Stopping node "default-k8s-different-port-20220601112749-9404"  ...
	I0601 11:29:37.961214    9796 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:29:38.967629    9796 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:38.967629    9796 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0061164s)
	W0601 11:29:38.967629    9796 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	W0601 11:29:38.967629    9796 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:38.967629    9796 retry.go:31] will retry after 2.670351914s: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:41.642823    9796 stop.go:39] StopHost: default-k8s-different-port-20220601112749-9404
	I0601 11:29:41.647091    9796 out.go:177] * Stopping node "default-k8s-different-port-20220601112749-9404"  ...
	I0601 11:29:41.665002    9796 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:29:42.759582    9796 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:42.759582    9796 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.094568s)
	W0601 11:29:42.759582    9796 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	W0601 11:29:42.759582    9796 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:42.759582    9796 retry.go:31] will retry after 1.909024939s: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:44.676922    9796 stop.go:39] StopHost: default-k8s-different-port-20220601112749-9404
	I0601 11:29:44.683918    9796 out.go:177] * Stopping node "default-k8s-different-port-20220601112749-9404"  ...
	I0601 11:29:44.700407    9796 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:29:45.816521    9796 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:45.816521    9796 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1161019s)
	W0601 11:29:45.816521    9796 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	W0601 11:29:45.816521    9796 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:45.816521    9796 retry.go:31] will retry after 3.323628727s: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:49.144622    9796 stop.go:39] StopHost: default-k8s-different-port-20220601112749-9404
	I0601 11:29:49.154062    9796 out.go:177] * Stopping node "default-k8s-different-port-20220601112749-9404"  ...
	I0601 11:29:49.169077    9796 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:29:50.218525    9796 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:29:50.218525    9796 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0494365s)
	W0601 11:29:50.218525    9796 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	W0601 11:29:50.218525    9796 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:29:50.221532    9796 out.go:177] 
	W0601 11:29:50.223525    9796 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-different-port-20220601112749-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-different-port-20220601112749-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:29:50.223525    9796 out.go:239] * 
	* 
	W0601 11:29:50.489235    9796 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_50.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:29:50.493776    9796 out.go:177] 

** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220601112749-9404 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601112749-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220601112749-9404: exit status 1 (1.1960699s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (3.0284595s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:29:54.741443    3948 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220601112749-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Stop (26.95s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (10.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404: exit status 7 (2.9866356s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:29:48.922869    9260 status.go:247] status error: host: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404

** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220601112753-9404 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220601112753-9404 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0308111s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220601112753-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220601112753-9404: exit status 1 (1.1776244s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220601112753-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404: exit status 7 (2.9854454s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:29:56.117094    7536 status.go:247] status error: host: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220601112753-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (10.19s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (10.1s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (2.9592705s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:29:57.717486    8712 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220601112749-9404 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220601112749-9404 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.9850062s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220601112749-9404: exit status 1 (1.14636s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (2.9976445s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:30:04.838567    7292 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220601112749-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (10.10s)

TestStartStop/group/newest-cni/serial/SecondStart (122.35s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220601112753-9404 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-20220601112753-9404 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m57.9202986s)

-- stdout --
	* [newest-cni-20220601112753-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node newest-cni-20220601112753-9404 in cluster newest-cni-20220601112753-9404
	* Pulling base image ...
	* docker "newest-cni-20220601112753-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-20220601112753-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:29:56.372356     720 out.go:296] Setting OutFile to fd 2000 ...
	I0601 11:29:56.453164     720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:29:56.453164     720 out.go:309] Setting ErrFile to fd 1944...
	I0601 11:29:56.453164     720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:29:56.465220     720 out.go:303] Setting JSON to false
	I0601 11:29:56.467282     720 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14931,"bootTime":1654068065,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:29:56.467957     720 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:29:56.471878     720 out.go:177] * [newest-cni-20220601112753-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:29:56.477867     720 notify.go:193] Checking for updates...
	I0601 11:29:56.479906     720 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:29:56.482241     720 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:29:56.484789     720 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:29:56.486803     720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:29:56.489856     720 config.go:178] Loaded profile config "newest-cni-20220601112753-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:29:56.490859     720 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:29:59.220301     720 docker.go:137] docker version: linux-20.10.14
	I0601 11:29:59.227269     720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:30:01.376249     720 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1489554s)
	I0601 11:30:01.377001     720 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:30:00.3075126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:30:01.377001     720 out.go:177] * Using the docker driver based on existing profile
	I0601 11:30:01.377001     720 start.go:284] selected driver: docker
	I0601 11:30:01.377001     720 start.go:806] validating driver "docker" against &{Name:newest-cni-20220601112753-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601112753-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:30:01.377001     720 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:30:01.516801     720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:30:03.661426     720 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1445248s)
	I0601 11:30:03.661426     720 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:30:02.5924896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:30:03.662183     720 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0601 11:30:03.662240     720 cni.go:95] Creating CNI manager for ""
	I0601 11:30:03.662291     720 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:30:03.662291     720 start_flags.go:306] config:
	{Name:newest-cni-20220601112753-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601112753-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:30:03.665868     720 out.go:177] * Starting control plane node newest-cni-20220601112753-9404 in cluster newest-cni-20220601112753-9404
	I0601 11:30:03.685031     720 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:30:03.687303     720 out.go:177] * Pulling base image ...
	I0601 11:30:03.690411     720 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:30:03.690411     720 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:30:03.690938     720 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:30:03.690938     720 cache.go:57] Caching tarball of preloaded images
	I0601 11:30:03.691235     720 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:30:03.691235     720 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:30:03.691755     720 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20220601112753-9404\config.json ...
	I0601 11:30:04.806525     720 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:30:04.806525     720 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:30:04.806525     720 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:30:04.806525     720 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:30:04.806525     720 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:30:04.806525     720 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:30:04.806525     720 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:30:04.806525     720 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:30:04.806525     720 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:30:07.114884     720 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:30:07.114884     720 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:30:07.114884     720 start.go:352] acquiring machines lock for newest-cni-20220601112753-9404: {Name:mka9c5833b483068b0a73f6342d879a5ebe04326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:30:07.114884     720 start.go:356] acquired machines lock for "newest-cni-20220601112753-9404" in 0s
	I0601 11:30:07.115502     720 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:30:07.115560     720 fix.go:55] fixHost starting: 
	I0601 11:30:07.131170     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:30:08.226487     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:08.226487     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0951746s)
	I0601 11:30:08.226729     720 fix.go:103] recreateIfNeeded on newest-cni-20220601112753-9404: state= err=unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:08.226807     720 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:30:08.235818     720 out.go:177] * docker "newest-cni-20220601112753-9404" container is missing, will recreate.
	I0601 11:30:08.239222     720 delete.go:124] DEMOLISHING newest-cni-20220601112753-9404 ...
	I0601 11:30:08.257361     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:30:09.342230     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:09.342230     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0848566s)
	W0601 11:30:09.342230     720 stop.go:75] unable to get state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:09.342230     720 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:09.356236     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:30:10.504273     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:10.504273     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1480251s)
	I0601 11:30:10.504273     720 delete.go:82] Unable to get host status for newest-cni-20220601112753-9404, assuming it has already been deleted: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:10.510266     720 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220601112753-9404
	W0601 11:30:11.613852     720 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:11.613852     720 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220601112753-9404: (1.1034882s)
	I0601 11:30:11.613852     720 kic.go:356] could not find the container newest-cni-20220601112753-9404 to remove it. will try anyways
	I0601 11:30:11.620794     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:30:12.697702     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:12.697702     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0768955s)
	W0601 11:30:12.697702     720 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:12.704620     720 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220601112753-9404 /bin/bash -c "sudo init 0"
	W0601 11:30:13.798578     720 cli_runner.go:211] docker exec --privileged -t newest-cni-20220601112753-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:30:13.798578     720 cli_runner.go:217] Completed: docker exec --privileged -t newest-cni-20220601112753-9404 /bin/bash -c "sudo init 0": (1.0939452s)
	I0601 11:30:13.798578     720 oci.go:625] error shutdown newest-cni-20220601112753-9404: docker exec --privileged -t newest-cni-20220601112753-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:14.814197     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:30:15.905487     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:15.905487     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0912783s)
	I0601 11:30:15.905487     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:15.905487     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:30:15.905487     720 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:16.473927     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:30:17.578732     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:17.578732     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.104684s)
	I0601 11:30:17.578732     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:17.578732     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:30:17.578732     720 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:18.677531     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:30:19.805776     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:19.805776     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1282321s)
	I0601 11:30:19.805776     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:19.805776     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:30:19.805776     720 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:21.132033     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:30:22.218981     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:22.218981     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0869356s)
	I0601 11:30:22.218981     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:22.218981     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:30:22.218981     720 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:23.810239     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:30:24.917059     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:24.917059     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.106807s)
	I0601 11:30:24.917059     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:24.917059     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:30:24.917059     720 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:27.269718     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:30:28.361762     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:28.361762     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0920314s)
	I0601 11:30:28.361762     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:28.361762     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:30:28.361762     720 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:32.884723     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:30:34.004487     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:34.004487     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1197509s)
	I0601 11:30:34.004487     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:34.004487     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:30:34.004487     720 oci.go:88] couldn't shut down newest-cni-20220601112753-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	 
	I0601 11:30:34.013390     720 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220601112753-9404
	I0601 11:30:35.099333     720 cli_runner.go:217] Completed: docker rm -f -v newest-cni-20220601112753-9404: (1.0857319s)
	I0601 11:30:35.105584     720 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220601112753-9404
	W0601 11:30:36.159560     720 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:36.159633     720 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220601112753-9404: (1.0538546s)
	I0601 11:30:36.166867     720 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:30:37.291643     720 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:30:37.291799     720 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1245685s)
	I0601 11:30:37.298173     720 network_create.go:272] running [docker network inspect newest-cni-20220601112753-9404] to gather additional debugging logs...
	I0601 11:30:37.299159     720 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404
	W0601 11:30:38.357376     720 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:38.357376     720 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404: (1.058205s)
	I0601 11:30:38.357464     720 network_create.go:275] error running [docker network inspect newest-cni-20220601112753-9404]: docker network inspect newest-cni-20220601112753-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220601112753-9404
	I0601 11:30:38.357501     720 network_create.go:277] output of [docker network inspect newest-cni-20220601112753-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220601112753-9404
	
	** /stderr **
	W0601 11:30:38.358091     720 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:30:38.358091     720 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:30:39.372262     720 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:30:39.376838     720 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:30:39.377511     720 start.go:165] libmachine.API.Create for "newest-cni-20220601112753-9404" (driver="docker")
	I0601 11:30:39.377511     720 client.go:168] LocalClient.Create starting
	I0601 11:30:39.378046     720 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:30:39.378158     720 main.go:134] libmachine: Decoding PEM data...
	I0601 11:30:39.378158     720 main.go:134] libmachine: Parsing certificate...
	I0601 11:30:39.378158     720 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:30:39.378158     720 main.go:134] libmachine: Decoding PEM data...
	I0601 11:30:39.378752     720 main.go:134] libmachine: Parsing certificate...
	I0601 11:30:39.386200     720 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:30:40.437084     720 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:30:40.437084     720 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0508723s)
	I0601 11:30:40.444188     720 network_create.go:272] running [docker network inspect newest-cni-20220601112753-9404] to gather additional debugging logs...
	I0601 11:30:40.444188     720 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404
	W0601 11:30:41.543333     720 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:41.543333     720 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404: (1.0991331s)
	I0601 11:30:41.543333     720 network_create.go:275] error running [docker network inspect newest-cni-20220601112753-9404]: docker network inspect newest-cni-20220601112753-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220601112753-9404
	I0601 11:30:41.543333     720 network_create.go:277] output of [docker network inspect newest-cni-20220601112753-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220601112753-9404
	
	** /stderr **
	I0601 11:30:41.549312     720 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:30:42.632499     720 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0831747s)
	I0601 11:30:42.651461     720 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000062c8] misses:0}
	I0601 11:30:42.651461     720 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:30:42.651461     720 network_create.go:115] attempt to create docker network newest-cni-20220601112753-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:30:42.658995     720 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404
	W0601 11:30:43.698079     720 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:43.698079     720 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: (1.0390728s)
	E0601 11:30:43.698079     720 network_create.go:104] error while trying to create docker network newest-cni-20220601112753-9404 192.168.49.0/24: create docker network newest-cni-20220601112753-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 556e00dccc5be922a3b7ee9b0df97e91eaf2f6818f4ae854b2393ead23920471 (br-556e00dccc5b): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:30:43.698079     720 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220601112753-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 556e00dccc5be922a3b7ee9b0df97e91eaf2f6818f4ae854b2393ead23920471 (br-556e00dccc5b): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220601112753-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 556e00dccc5be922a3b7ee9b0df97e91eaf2f6818f4ae854b2393ead23920471 (br-556e00dccc5b): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:30:43.711078     720 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:30:44.828726     720 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.117636s)
	I0601 11:30:44.834734     720 cli_runner.go:164] Run: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:30:45.890714     720 cli_runner.go:211] docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:30:45.890714     720 cli_runner.go:217] Completed: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0556685s)
	I0601 11:30:45.890714     720 client.go:171] LocalClient.Create took 6.5131303s
	I0601 11:30:47.912648     720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:30:47.919882     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:30:49.050422     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:49.050422     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.1304984s)
	I0601 11:30:49.050422     720 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:49.232948     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:30:50.309642     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:50.309716     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0765705s)
	W0601 11:30:50.309775     720 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:30:50.309775     720 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:50.321628     720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:30:50.327248     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:30:51.413043     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:51.413043     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0857831s)
	I0601 11:30:51.413043     720 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:51.627683     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:30:52.751817     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:52.751887     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.1237856s)
	W0601 11:30:52.751887     720 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:30:52.751887     720 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:52.751887     720 start.go:134] duration metric: createHost completed in 13.3792559s
	I0601 11:30:52.765868     720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:30:52.774151     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:30:53.859796     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:53.859796     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0856328s)
	I0601 11:30:53.859796     720 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:54.198703     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:30:55.253765     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:55.253765     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0550496s)
	W0601 11:30:55.253765     720 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:30:55.253765     720 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:55.262771     720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:30:55.269765     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:30:56.355736     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:56.355794     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0858285s)
	I0601 11:30:56.355794     720 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:56.591205     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:30:57.692608     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:30:57.692608     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.1013912s)
	W0601 11:30:57.692608     720 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:30:57.692608     720 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:30:57.692608     720 fix.go:57] fixHost completed within 50.5764818s
	I0601 11:30:57.692608     720 start.go:81] releasing machines lock for "newest-cni-20220601112753-9404", held for 50.5771578s
	W0601 11:30:57.692608     720 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	W0601 11:30:57.692608     720 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	
	I0601 11:30:57.692608     720 start.go:614] Will try again in 5 seconds ...
	I0601 11:31:02.707226     720 start.go:352] acquiring machines lock for newest-cni-20220601112753-9404: {Name:mka9c5833b483068b0a73f6342d879a5ebe04326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:31:02.707226     720 start.go:356] acquired machines lock for "newest-cni-20220601112753-9404" in 0s
	I0601 11:31:02.707226     720 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:31:02.707762     720 fix.go:55] fixHost starting: 
	I0601 11:31:02.723136     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:31:03.833872     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:03.833872     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1107235s)
	I0601 11:31:03.833872     720 fix.go:103] recreateIfNeeded on newest-cni-20220601112753-9404: state= err=unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:03.833872     720 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:31:03.838464     720 out.go:177] * docker "newest-cni-20220601112753-9404" container is missing, will recreate.
	I0601 11:31:03.841465     720 delete.go:124] DEMOLISHING newest-cni-20220601112753-9404 ...
	I0601 11:31:03.860157     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:31:04.949110     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:04.949110     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0889407s)
	W0601 11:31:04.949110     720 stop.go:75] unable to get state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:04.949110     720 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:04.963295     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:31:06.062238     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:06.062391     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0987291s)
	I0601 11:31:06.062474     720 delete.go:82] Unable to get host status for newest-cni-20220601112753-9404, assuming it has already been deleted: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:06.070284     720 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220601112753-9404
	W0601 11:31:07.174359     720 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:07.174359     720 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220601112753-9404: (1.1040636s)
	I0601 11:31:07.174359     720 kic.go:356] could not find the container newest-cni-20220601112753-9404 to remove it. will try anyways
	I0601 11:31:07.180378     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:31:08.224593     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:08.224593     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0442026s)
	W0601 11:31:08.224593     720 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:08.231915     720 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220601112753-9404 /bin/bash -c "sudo init 0"
	W0601 11:31:09.248443     720 cli_runner.go:211] docker exec --privileged -t newest-cni-20220601112753-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:31:09.248443     720 cli_runner.go:217] Completed: docker exec --privileged -t newest-cni-20220601112753-9404 /bin/bash -c "sudo init 0": (1.0165167s)
	I0601 11:31:09.248443     720 oci.go:625] error shutdown newest-cni-20220601112753-9404: docker exec --privileged -t newest-cni-20220601112753-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:10.272638     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:31:11.347533     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:11.347533     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0745848s)
	I0601 11:31:11.347610     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:11.347610     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:31:11.347610     720 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:11.852881     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:31:13.015072     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:13.015072     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1621253s)
	I0601 11:31:13.015072     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:13.015072     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:31:13.015072     720 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:13.617144     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:31:14.710000     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:14.710184     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.0927835s)
	I0601 11:31:14.710342     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:14.710388     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:31:14.710460     720 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:15.619892     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:31:16.740544     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:16.740544     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1206394s)
	I0601 11:31:16.740544     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:16.740544     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:31:16.740544     720 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:18.751253     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:31:19.875845     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:19.875845     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.12458s)
	I0601 11:31:19.875845     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:19.875845     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:31:19.875845     720 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:21.722648     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:31:22.852589     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:22.852589     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1299286s)
	I0601 11:31:22.852589     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:22.852589     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:31:22.852589     720 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:25.543989     720 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:31:26.655098     720 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:26.655146     720 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (1.1104151s)
	I0601 11:31:26.655182     720 oci.go:637] temporary error verifying shutdown: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:26.655242     720 oci.go:639] temporary error: container newest-cni-20220601112753-9404 status is  but expect it to be exited
	I0601 11:31:26.655301     720 oci.go:88] couldn't shut down newest-cni-20220601112753-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	 
	I0601 11:31:26.662735     720 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220601112753-9404
	I0601 11:31:27.752708     720 cli_runner.go:217] Completed: docker rm -f -v newest-cni-20220601112753-9404: (1.0899601s)
	I0601 11:31:27.760723     720 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220601112753-9404
	W0601 11:31:28.894104     720 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:28.894104     720 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} newest-cni-20220601112753-9404: (1.1333681s)
	I0601 11:31:28.898113     720 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:31:30.023494     720 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:31:30.023494     720 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1252713s)
	I0601 11:31:30.031371     720 network_create.go:272] running [docker network inspect newest-cni-20220601112753-9404] to gather additional debugging logs...
	I0601 11:31:30.031371     720 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404
	W0601 11:31:31.165814     720 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:31.165879     720 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404: (1.1343352s)
	I0601 11:31:31.165906     720 network_create.go:275] error running [docker network inspect newest-cni-20220601112753-9404]: docker network inspect newest-cni-20220601112753-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220601112753-9404
	I0601 11:31:31.165906     720 network_create.go:277] output of [docker network inspect newest-cni-20220601112753-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220601112753-9404
	
	** /stderr **
	W0601 11:31:31.166791     720 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:31:31.166791     720 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:31:32.174768     720 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:31:32.288668     720 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:31:32.289498     720 start.go:165] libmachine.API.Create for "newest-cni-20220601112753-9404" (driver="docker")
	I0601 11:31:32.289498     720 client.go:168] LocalClient.Create starting
	I0601 11:31:32.290069     720 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:31:32.290254     720 main.go:134] libmachine: Decoding PEM data...
	I0601 11:31:32.290352     720 main.go:134] libmachine: Parsing certificate...
	I0601 11:31:32.290547     720 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:31:32.290737     720 main.go:134] libmachine: Decoding PEM data...
	I0601 11:31:32.290737     720 main.go:134] libmachine: Parsing certificate...
	I0601 11:31:32.321019     720 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:31:33.515371     720 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:31:33.515491     720 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1942075s)
	I0601 11:31:33.523530     720 network_create.go:272] running [docker network inspect newest-cni-20220601112753-9404] to gather additional debugging logs...
	I0601 11:31:33.523530     720 cli_runner.go:164] Run: docker network inspect newest-cni-20220601112753-9404
	W0601 11:31:35.771256     720 cli_runner.go:211] docker network inspect newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:35.771256     720 cli_runner.go:217] Completed: docker network inspect newest-cni-20220601112753-9404: (2.2477004s)
	I0601 11:31:35.771256     720 network_create.go:275] error running [docker network inspect newest-cni-20220601112753-9404]: docker network inspect newest-cni-20220601112753-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220601112753-9404
	I0601 11:31:35.771256     720 network_create.go:277] output of [docker network inspect newest-cni-20220601112753-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220601112753-9404
	
	** /stderr **
	I0601 11:31:35.777250     720 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:31:38.443164     720 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (2.6658838s)
	I0601 11:31:38.464154     720 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062c8] amended:false}} dirty:map[] misses:0}
	I0601 11:31:38.464154     720 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:31:38.484167     720 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062c8] amended:true}} dirty:map[192.168.49.0:0xc0000062c8 192.168.58.0:0xc000006540] misses:0}
	I0601 11:31:38.484167     720 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:31:38.484167     720 network_create.go:115] attempt to create docker network newest-cni-20220601112753-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:31:38.490131     720 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404
	W0601 11:31:39.585426     720 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:39.585426     720 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: (1.0952821s)
	E0601 11:31:39.585426     720 network_create.go:104] error while trying to create docker network newest-cni-20220601112753-9404 192.168.58.0/24: create docker network newest-cni-20220601112753-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a89d066ff64d2d08c7cd69c4bd80f20ce5707c55a1dc5792b0b2c9e227406674 (br-a89d066ff64d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:31:39.585426     720 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220601112753-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a89d066ff64d2d08c7cd69c4bd80f20ce5707c55a1dc5792b0b2c9e227406674 (br-a89d066ff64d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220601112753-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a89d066ff64d2d08c7cd69c4bd80f20ce5707c55a1dc5792b0b2c9e227406674 (br-a89d066ff64d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:31:39.599427     720 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:31:40.738303     720 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1388632s)
	I0601 11:31:40.748299     720 cli_runner.go:164] Run: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:31:41.873574     720 cli_runner.go:211] docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:31:41.873574     720 cli_runner.go:217] Completed: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1250688s)
	I0601 11:31:41.873664     720 client.go:171] LocalClient.Create took 9.5839658s
	I0601 11:31:43.891682     720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:31:43.897836     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:31:45.025822     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:45.025901     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.1279219s)
	I0601 11:31:45.026094     720 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:45.313207     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:31:46.453506     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:46.453506     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.1402858s)
	W0601 11:31:46.453506     720 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:31:46.454695     720 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:46.464483     720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:31:46.470489     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:31:47.575343     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:47.575343     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.1018463s)
	I0601 11:31:47.575343     720 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:47.784001     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:31:48.881041     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:48.881065     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.096839s)
	W0601 11:31:48.881425     720 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:31:48.881490     720 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:48.881490     720 start.go:134] duration metric: createHost completed in 16.7063406s
	I0601 11:31:48.896055     720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:31:48.904044     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:31:50.009641     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:50.009641     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.1055843s)
	I0601 11:31:50.009641     720 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:50.331719     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:31:51.422480     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:51.422480     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0907485s)
	W0601 11:31:51.422821     720 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:31:51.422875     720 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:51.432613     720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:31:51.439854     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:31:52.589321     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:52.589321     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.1494549s)
	I0601 11:31:52.589321     720 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:52.944055     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404
	W0601 11:31:54.023385     720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404 returned with exit code 1
	I0601 11:31:54.023385     720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: (1.0793174s)
	W0601 11:31:54.023385     720 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:31:54.023385     720 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220601112753-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601112753-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	I0601 11:31:54.023385     720 fix.go:57] fixHost completed within 51.3150528s
	I0601 11:31:54.023385     720 start.go:81] releasing machines lock for "newest-cni-20220601112753-9404", held for 51.3155888s
	W0601 11:31:54.023385     720 out.go:239] * Failed to start docker container. Running "minikube delete -p newest-cni-20220601112753-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p newest-cni-20220601112753-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	
	I0601 11:31:54.028531     720 out.go:177] 
	W0601 11:31:54.030389     720 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220601112753-9404 container: docker volume create newest-cni-20220601112753-9404 --label name.minikube.sigs.k8s.io=newest-cni-20220601112753-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220601112753-9404: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220601112753-9404': mkdir /var/lib/docker/volumes/newest-cni-20220601112753-9404: read-only file system
	
	W0601 11:31:54.030389     720 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:31:54.031380     720 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:31:54.035177     720 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p newest-cni-20220601112753-9404 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220601112753-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220601112753-9404: exit status 1 (1.1626728s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220601112753-9404

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404: exit status 7 (3.0361607s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:31:58.464042    9440 status.go:247] status error: host: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220601112753-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (122.35s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (77.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20220601112038-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20220601112038-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 60 (1m17.8479945s)

                                                
                                                
-- stdout --
	* [cilium-20220601112038-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cilium-20220601112038-9404 in cluster cilium-20220601112038-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cilium-20220601112038-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 11:30:02.000238    3760 out.go:296] Setting OutFile to fd 1424 ...
	I0601 11:30:02.061609    3760 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:30:02.061609    3760 out.go:309] Setting ErrFile to fd 1564...
	I0601 11:30:02.061609    3760 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:30:02.073921    3760 out.go:303] Setting JSON to false
	I0601 11:30:02.077238    3760 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14937,"bootTime":1654068065,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:30:02.077382    3760 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:30:02.083123    3760 out.go:177] * [cilium-20220601112038-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:30:02.086786    3760 notify.go:193] Checking for updates...
	I0601 11:30:02.088775    3760 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:30:02.094177    3760 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:30:02.094177    3760 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:30:02.098660    3760 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:30:02.102600    3760 config.go:178] Loaded profile config "default-k8s-different-port-20220601112749-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:30:02.102600    3760 config.go:178] Loaded profile config "kindnet-20220601112030-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:30:02.103605    3760 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:30:02.103605    3760 config.go:178] Loaded profile config "newest-cni-20220601112753-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:30:02.103605    3760 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:30:04.806525    3760 docker.go:137] docker version: linux-20.10.14
	I0601 11:30:04.814544    3760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:30:06.933714    3760 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1191458s)
	I0601 11:30:06.934498    3760 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-01 11:30:05.881043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:30:06.937811    3760 out.go:177] * Using the docker driver based on user configuration
	I0601 11:30:06.941782    3760 start.go:284] selected driver: docker
	I0601 11:30:06.941782    3760 start.go:806] validating driver "docker" against <nil>
	I0601 11:30:06.941782    3760 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:30:07.100132    3760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:30:09.265532    3760 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1653751s)
	I0601 11:30:09.265532    3760 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:30:08.1667083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:30:09.265532    3760 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:30:09.266454    3760 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:30:09.269513    3760 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:30:09.271506    3760 cni.go:95] Creating CNI manager for "cilium"
	I0601 11:30:09.271506    3760 start_flags.go:301] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0601 11:30:09.271506    3760 start_flags.go:306] config:
	{Name:cilium-20220601112038-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220601112038-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:30:09.275485    3760 out.go:177] * Starting control plane node cilium-20220601112038-9404 in cluster cilium-20220601112038-9404
	I0601 11:30:09.277314    3760 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:30:09.279497    3760 out.go:177] * Pulling base image ...
	I0601 11:30:09.282458    3760 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:30:09.282537    3760 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:30:09.282537    3760 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:30:09.282537    3760 cache.go:57] Caching tarball of preloaded images
	I0601 11:30:09.282537    3760 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:30:09.283454    3760 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:30:09.283454    3760 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-20220601112038-9404\config.json ...
	I0601 11:30:09.283454    3760 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-20220601112038-9404\config.json: {Name:mka96dd38f4f549b0018dd47426ab980af58f058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:30:10.425120    3760 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:30:10.425120    3760 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:30:10.425120    3760 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:30:10.425120    3760 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:30:10.425910    3760 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:30:10.425910    3760 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:30:10.425910    3760 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:30:10.425910    3760 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:30:10.425910    3760 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:30:12.838207    3760 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:30:12.838298    3760 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:30:12.838469    3760 start.go:352] acquiring machines lock for cilium-20220601112038-9404: {Name:mkb71a410542888e015e80e8facb3d0596f789b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:30:12.838672    3760 start.go:356] acquired machines lock for "cilium-20220601112038-9404" in 164.5µs
	I0601 11:30:12.838941    3760 start.go:91] Provisioning new machine with config: &{Name:cilium-20220601112038-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220601112038-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:30:12.839048    3760 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:30:12.843581    3760 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:30:12.844126    3760 start.go:165] libmachine.API.Create for "cilium-20220601112038-9404" (driver="docker")
	I0601 11:30:12.844252    3760 client.go:168] LocalClient.Create starting
	I0601 11:30:12.844880    3760 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:30:12.844949    3760 main.go:134] libmachine: Decoding PEM data...
	I0601 11:30:12.844949    3760 main.go:134] libmachine: Parsing certificate...
	I0601 11:30:12.844949    3760 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:30:12.844949    3760 main.go:134] libmachine: Decoding PEM data...
	I0601 11:30:12.844949    3760 main.go:134] libmachine: Parsing certificate...
	I0601 11:30:12.853290    3760 cli_runner.go:164] Run: docker network inspect cilium-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:30:13.953021    3760 cli_runner.go:211] docker network inspect cilium-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:30:13.953123    3760 cli_runner.go:217] Completed: docker network inspect cilium-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0996871s)
	I0601 11:30:13.961508    3760 network_create.go:272] running [docker network inspect cilium-20220601112038-9404] to gather additional debugging logs...
	I0601 11:30:13.962035    3760 cli_runner.go:164] Run: docker network inspect cilium-20220601112038-9404
	W0601 11:30:15.023298    3760 cli_runner.go:211] docker network inspect cilium-20220601112038-9404 returned with exit code 1
	I0601 11:30:15.023298    3760 cli_runner.go:217] Completed: docker network inspect cilium-20220601112038-9404: (1.0612511s)
	I0601 11:30:15.023298    3760 network_create.go:275] error running [docker network inspect cilium-20220601112038-9404]: docker network inspect cilium-20220601112038-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220601112038-9404
	I0601 11:30:15.023298    3760 network_create.go:277] output of [docker network inspect cilium-20220601112038-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220601112038-9404
	
	** /stderr **
	I0601 11:30:15.032984    3760 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:30:16.139052    3760 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1060051s)
	I0601 11:30:16.164757    3760 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00058eed0] misses:0}
	I0601 11:30:16.164921    3760 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:30:16.164950    3760 network_create.go:115] attempt to create docker network cilium-20220601112038-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:30:16.171579    3760 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601112038-9404
	W0601 11:30:17.295801    3760 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601112038-9404 returned with exit code 1
	I0601 11:30:17.295801    3760 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601112038-9404: (1.1242089s)
	E0601 11:30:17.295801    3760 network_create.go:104] error while trying to create docker network cilium-20220601112038-9404 192.168.49.0/24: create docker network cilium-20220601112038-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6c9fa1cff05e5d0aa3f53f3d3481b8a5ddc320c99e504060531660f62e8020d4 (br-6c9fa1cff05e): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:30:17.295801    3760 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220601112038-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6c9fa1cff05e5d0aa3f53f3d3481b8a5ddc320c99e504060531660f62e8020d4 (br-6c9fa1cff05e): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220601112038-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6c9fa1cff05e5d0aa3f53f3d3481b8a5ddc320c99e504060531660f62e8020d4 (br-6c9fa1cff05e): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:30:17.310795    3760 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:30:18.399439    3760 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0886321s)
	I0601 11:30:18.407781    3760 cli_runner.go:164] Run: docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:30:19.507473    3760 cli_runner.go:211] docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:30:19.507537    3760 cli_runner.go:217] Completed: docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0996796s)
	I0601 11:30:19.507603    3760 client.go:171] LocalClient.Create took 6.6632538s
	I0601 11:30:21.530886    3760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:30:21.536782    3760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404
	W0601 11:30:22.621628    3760 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404 returned with exit code 1
	I0601 11:30:22.621628    3760 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: (1.0848334s)
	I0601 11:30:22.621628    3760 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:22.912220    3760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404
	W0601 11:30:23.974555    3760 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404 returned with exit code 1
	I0601 11:30:23.974555    3760 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: (1.0623226s)
	W0601 11:30:23.974555    3760 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	
	W0601 11:30:23.974555    3760 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:23.985549    3760 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:30:23.993503    3760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404
	W0601 11:30:25.104998    3760 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404 returned with exit code 1
	I0601 11:30:25.104998    3760 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: (1.1114821s)
	I0601 11:30:25.104998    3760 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:25.415280    3760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404
	W0601 11:30:26.503256    3760 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404 returned with exit code 1
	I0601 11:30:26.503533    3760 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: (1.0879644s)
	W0601 11:30:26.503624    3760 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	
	W0601 11:30:26.503624    3760 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:26.503624    3760 start.go:134] duration metric: createHost completed in 13.664313s
	I0601 11:30:26.503624    3760 start.go:81] releasing machines lock for "cilium-20220601112038-9404", held for 13.6647465s
	W0601 11:30:26.503624    3760 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for cilium-20220601112038-9404 container: docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/cilium-20220601112038-9404': mkdir /var/lib/docker/volumes/cilium-20220601112038-9404: read-only file system
	I0601 11:30:26.518268    3760 cli_runner.go:164] Run: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:27.587629    3760 cli_runner.go:211] docker container inspect cilium-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:27.587629    3760 cli_runner.go:217] Completed: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: (1.0693494s)
	I0601 11:30:27.587629    3760 delete.go:82] Unable to get host status for cilium-20220601112038-9404, assuming it has already been deleted: state: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	W0601 11:30:27.587629    3760 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cilium-20220601112038-9404 container: docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/cilium-20220601112038-9404': mkdir /var/lib/docker/volumes/cilium-20220601112038-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cilium-20220601112038-9404 container: docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/cilium-20220601112038-9404': mkdir /var/lib/docker/volumes/cilium-20220601112038-9404: read-only file system
	
	I0601 11:30:27.587629    3760 start.go:614] Will try again in 5 seconds ...
	I0601 11:30:32.590313    3760 start.go:352] acquiring machines lock for cilium-20220601112038-9404: {Name:mkb71a410542888e015e80e8facb3d0596f789b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:30:32.590920    3760 start.go:356] acquired machines lock for "cilium-20220601112038-9404" in 372.8µs
	I0601 11:30:32.591082    3760 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:30:32.591161    3760 fix.go:55] fixHost starting: 
	I0601 11:30:32.604273    3760 cli_runner.go:164] Run: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:33.723521    3760 cli_runner.go:211] docker container inspect cilium-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:33.723577    3760 cli_runner.go:217] Completed: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: (1.1191553s)
	I0601 11:30:33.723703    3760 fix.go:103] recreateIfNeeded on cilium-20220601112038-9404: state= err=unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:33.723703    3760 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:30:33.728733    3760 out.go:177] * docker "cilium-20220601112038-9404" container is missing, will recreate.
	I0601 11:30:33.731155    3760 delete.go:124] DEMOLISHING cilium-20220601112038-9404 ...
	I0601 11:30:33.744696    3760 cli_runner.go:164] Run: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:34.820457    3760 cli_runner.go:211] docker container inspect cilium-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:34.820457    3760 cli_runner.go:217] Completed: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: (1.0755543s)
	W0601 11:30:34.820457    3760 stop.go:75] unable to get state: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:34.820457    3760 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:34.833910    3760 cli_runner.go:164] Run: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:35.908851    3760 cli_runner.go:211] docker container inspect cilium-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:35.908851    3760 cli_runner.go:217] Completed: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: (1.0749289s)
	I0601 11:30:35.908851    3760 delete.go:82] Unable to get host status for cilium-20220601112038-9404, assuming it has already been deleted: state: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:35.914862    3760 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-20220601112038-9404
	W0601 11:30:37.026317    3760 cli_runner.go:211] docker container inspect -f {{.Id}} cilium-20220601112038-9404 returned with exit code 1
	I0601 11:30:37.026317    3760 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} cilium-20220601112038-9404: (1.1104618s)
	I0601 11:30:37.026317    3760 kic.go:356] could not find the container cilium-20220601112038-9404 to remove it. will try anyways
	I0601 11:30:37.033776    3760 cli_runner.go:164] Run: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:38.092141    3760 cli_runner.go:211] docker container inspect cilium-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:38.092141    3760 cli_runner.go:217] Completed: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: (1.058353s)
	W0601 11:30:38.092141    3760 oci.go:84] error getting container status, will try to delete anyways: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:38.099892    3760 cli_runner.go:164] Run: docker exec --privileged -t cilium-20220601112038-9404 /bin/bash -c "sudo init 0"
	W0601 11:30:39.168176    3760 cli_runner.go:211] docker exec --privileged -t cilium-20220601112038-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:30:39.168176    3760 cli_runner.go:217] Completed: docker exec --privileged -t cilium-20220601112038-9404 /bin/bash -c "sudo init 0": (1.0682719s)
	I0601 11:30:39.168176    3760 oci.go:625] error shutdown cilium-20220601112038-9404: docker exec --privileged -t cilium-20220601112038-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:40.189558    3760 cli_runner.go:164] Run: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:41.247644    3760 cli_runner.go:211] docker container inspect cilium-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:41.247644    3760 cli_runner.go:217] Completed: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: (1.0580735s)
	I0601 11:30:41.247644    3760 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:41.247644    3760 oci.go:639] temporary error: container cilium-20220601112038-9404 status is  but expect it to be exited
	I0601 11:30:41.247644    3760 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:41.723182    3760 cli_runner.go:164] Run: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:42.822794    3760 cli_runner.go:211] docker container inspect cilium-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:42.822794    3760 cli_runner.go:217] Completed: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: (1.0994456s)
	I0601 11:30:42.822890    3760 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:42.822972    3760 oci.go:639] temporary error: container cilium-20220601112038-9404 status is  but expect it to be exited
	I0601 11:30:42.823013    3760 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:43.720088    3760 cli_runner.go:164] Run: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:44.797709    3760 cli_runner.go:211] docker container inspect cilium-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:44.797709    3760 cli_runner.go:217] Completed: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: (1.077609s)
	I0601 11:30:44.797709    3760 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:44.797709    3760 oci.go:639] temporary error: container cilium-20220601112038-9404 status is  but expect it to be exited
	I0601 11:30:44.797709    3760 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:45.445224    3760 cli_runner.go:164] Run: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:46.492932    3760 cli_runner.go:211] docker container inspect cilium-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:46.492998    3760 cli_runner.go:217] Completed: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: (1.0476966s)
	I0601 11:30:46.493103    3760 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:46.493137    3760 oci.go:639] temporary error: container cilium-20220601112038-9404 status is  but expect it to be exited
	I0601 11:30:46.493170    3760 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:47.613499    3760 cli_runner.go:164] Run: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:48.678245    3760 cli_runner.go:211] docker container inspect cilium-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:48.678245    3760 cli_runner.go:217] Completed: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: (1.0647346s)
	I0601 11:30:48.678245    3760 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:48.678245    3760 oci.go:639] temporary error: container cilium-20220601112038-9404 status is  but expect it to be exited
	I0601 11:30:48.678245    3760 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:50.209736    3760 cli_runner.go:164] Run: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:51.319971    3760 cli_runner.go:211] docker container inspect cilium-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:51.320060    3760 cli_runner.go:217] Completed: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: (1.1102227s)
	I0601 11:30:51.320338    3760 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:51.320338    3760 oci.go:639] temporary error: container cilium-20220601112038-9404 status is  but expect it to be exited
	I0601 11:30:51.320338    3760 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:54.371357    3760 cli_runner.go:164] Run: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:55.428086    3760 cli_runner.go:211] docker container inspect cilium-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:55.428226    3760 cli_runner.go:217] Completed: docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: (1.0557638s)
	I0601 11:30:55.428226    3760 oci.go:637] temporary error verifying shutdown: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:30:55.428226    3760 oci.go:639] temporary error: container cilium-20220601112038-9404 status is  but expect it to be exited
	I0601 11:30:55.428226    3760 oci.go:88] couldn't shut down cilium-20220601112038-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "cilium-20220601112038-9404": docker container inspect cilium-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	 
	I0601 11:30:55.435119    3760 cli_runner.go:164] Run: docker rm -f -v cilium-20220601112038-9404
	I0601 11:30:56.511805    3760 cli_runner.go:217] Completed: docker rm -f -v cilium-20220601112038-9404: (1.0765057s)
	I0601 11:30:56.519898    3760 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-20220601112038-9404
	W0601 11:30:57.614605    3760 cli_runner.go:211] docker container inspect -f {{.Id}} cilium-20220601112038-9404 returned with exit code 1
	I0601 11:30:57.614605    3760 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} cilium-20220601112038-9404: (1.0946668s)
	I0601 11:30:57.621661    3760 cli_runner.go:164] Run: docker network inspect cilium-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:30:58.704000    3760 cli_runner.go:211] docker network inspect cilium-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:30:58.704074    3760 cli_runner.go:217] Completed: docker network inspect cilium-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0811257s)
	I0601 11:30:58.711195    3760 network_create.go:272] running [docker network inspect cilium-20220601112038-9404] to gather additional debugging logs...
	I0601 11:30:58.711195    3760 cli_runner.go:164] Run: docker network inspect cilium-20220601112038-9404
	W0601 11:30:59.809771    3760 cli_runner.go:211] docker network inspect cilium-20220601112038-9404 returned with exit code 1
	I0601 11:30:59.809771    3760 cli_runner.go:217] Completed: docker network inspect cilium-20220601112038-9404: (1.0985638s)
	I0601 11:30:59.809975    3760 network_create.go:275] error running [docker network inspect cilium-20220601112038-9404]: docker network inspect cilium-20220601112038-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220601112038-9404
	I0601 11:30:59.809975    3760 network_create.go:277] output of [docker network inspect cilium-20220601112038-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220601112038-9404
	
	** /stderr **
	W0601 11:30:59.810902    3760 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:30:59.810902    3760 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:31:00.822195    3760 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:31:00.826624    3760 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:31:00.826624    3760 start.go:165] libmachine.API.Create for "cilium-20220601112038-9404" (driver="docker")
	I0601 11:31:00.826624    3760 client.go:168] LocalClient.Create starting
	I0601 11:31:00.827368    3760 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:31:00.827368    3760 main.go:134] libmachine: Decoding PEM data...
	I0601 11:31:00.827942    3760 main.go:134] libmachine: Parsing certificate...
	I0601 11:31:00.828084    3760 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:31:00.828084    3760 main.go:134] libmachine: Decoding PEM data...
	I0601 11:31:00.828084    3760 main.go:134] libmachine: Parsing certificate...
	I0601 11:31:00.837858    3760 cli_runner.go:164] Run: docker network inspect cilium-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:31:01.911382    3760 cli_runner.go:211] docker network inspect cilium-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:31:01.911382    3760 cli_runner.go:217] Completed: docker network inspect cilium-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0735124s)
	I0601 11:31:01.918686    3760 network_create.go:272] running [docker network inspect cilium-20220601112038-9404] to gather additional debugging logs...
	I0601 11:31:01.918686    3760 cli_runner.go:164] Run: docker network inspect cilium-20220601112038-9404
	W0601 11:31:03.021131    3760 cli_runner.go:211] docker network inspect cilium-20220601112038-9404 returned with exit code 1
	I0601 11:31:03.021131    3760 cli_runner.go:217] Completed: docker network inspect cilium-20220601112038-9404: (1.1024336s)
	I0601 11:31:03.021131    3760 network_create.go:275] error running [docker network inspect cilium-20220601112038-9404]: docker network inspect cilium-20220601112038-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220601112038-9404
	I0601 11:31:03.021131    3760 network_create.go:277] output of [docker network inspect cilium-20220601112038-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220601112038-9404
	
	** /stderr **
	I0601 11:31:03.028843    3760 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:31:04.143168    3760 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1139673s)
	I0601 11:31:04.160004    3760 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058eed0] amended:false}} dirty:map[] misses:0}
	I0601 11:31:04.160004    3760 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:31:04.175299    3760 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058eed0] amended:true}} dirty:map[192.168.49.0:0xc00058eed0 192.168.58.0:0xc00014eae0] misses:0}
	I0601 11:31:04.175299    3760 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:31:04.175299    3760 network_create.go:115] attempt to create docker network cilium-20220601112038-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:31:04.182604    3760 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601112038-9404
	W0601 11:31:05.275081    3760 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601112038-9404 returned with exit code 1
	I0601 11:31:05.275081    3760 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601112038-9404: (1.0924653s)
	E0601 11:31:05.275081    3760 network_create.go:104] error while trying to create docker network cilium-20220601112038-9404 192.168.58.0/24: create docker network cilium-20220601112038-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7b4df71db082916505fd72dd92920b1393eef4324667aeedd58755e8984a86e3 (br-7b4df71db082): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:31:05.275081    3760 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220601112038-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7b4df71db082916505fd72dd92920b1393eef4324667aeedd58755e8984a86e3 (br-7b4df71db082): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220601112038-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7b4df71db082916505fd72dd92920b1393eef4324667aeedd58755e8984a86e3 (br-7b4df71db082): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:31:05.289155    3760 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:31:06.389024    3760 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.099804s)
	I0601 11:31:06.396502    3760 cli_runner.go:164] Run: docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:31:07.495029    3760 cli_runner.go:211] docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:31:07.495029    3760 cli_runner.go:217] Completed: docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0985155s)
	I0601 11:31:07.495029    3760 client.go:171] LocalClient.Create took 6.6683312s
	I0601 11:31:09.511523    3760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:31:09.518428    3760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404
	W0601 11:31:10.606863    3760 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404 returned with exit code 1
	I0601 11:31:10.606863    3760 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: (1.0883039s)
	I0601 11:31:10.607232    3760 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:31:10.960132    3760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404
	W0601 11:31:12.077662    3760 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404 returned with exit code 1
	I0601 11:31:12.077662    3760 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: (1.1175176s)
	W0601 11:31:12.080756    3760 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	
	W0601 11:31:12.080814    3760 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:31:12.092056    3760 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:31:12.099480    3760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404
	W0601 11:31:13.216061    3760 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404 returned with exit code 1
	I0601 11:31:13.216061    3760 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: (1.1165686s)
	I0601 11:31:13.216061    3760 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:31:13.445871    3760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404
	W0601 11:31:14.557697    3760 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404 returned with exit code 1
	I0601 11:31:14.557742    3760 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: (1.1116986s)
	W0601 11:31:14.558271    3760 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	
	W0601 11:31:14.558320    3760 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:31:14.558376    3760 start.go:134] duration metric: createHost completed in 13.7359857s
	I0601 11:31:14.572991    3760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:31:14.586019    3760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404
	W0601 11:31:15.688356    3760 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404 returned with exit code 1
	I0601 11:31:15.688356    3760 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: (1.1022771s)
	I0601 11:31:15.688356    3760 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:31:15.955503    3760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404
	W0601 11:31:17.037929    3760 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404 returned with exit code 1
	I0601 11:31:17.037929    3760 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: (1.0824143s)
	W0601 11:31:17.037929    3760 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	
	W0601 11:31:17.037929    3760 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:31:17.058866    3760 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:31:17.066838    3760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404
	W0601 11:31:18.166673    3760 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404 returned with exit code 1
	I0601 11:31:18.166723    3760 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: (1.0995924s)
	I0601 11:31:18.166879    3760 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:31:18.380710    3760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404
	W0601 11:31:19.532723    3760 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404 returned with exit code 1
	I0601 11:31:19.532723    3760 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: (1.1520002s)
	W0601 11:31:19.532723    3760 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	
	W0601 11:31:19.532723    3760 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220601112038-9404
	I0601 11:31:19.532723    3760 fix.go:57] fixHost completed within 46.9410387s
	I0601 11:31:19.532723    3760 start.go:81] releasing machines lock for "cilium-20220601112038-9404", held for 46.9412796s
	W0601 11:31:19.532723    3760 out.go:239] * Failed to start docker container. Running "minikube delete -p cilium-20220601112038-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220601112038-9404 container: docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/cilium-20220601112038-9404': mkdir /var/lib/docker/volumes/cilium-20220601112038-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p cilium-20220601112038-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220601112038-9404 container: docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/cilium-20220601112038-9404': mkdir /var/lib/docker/volumes/cilium-20220601112038-9404: read-only file system
	
	I0601 11:31:19.538735    3760 out.go:177] 
	W0601 11:31:19.540735    3760 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220601112038-9404 container: docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/cilium-20220601112038-9404': mkdir /var/lib/docker/volumes/cilium-20220601112038-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220601112038-9404 container: docker volume create cilium-20220601112038-9404 --label name.minikube.sigs.k8s.io=cilium-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/cilium-20220601112038-9404': mkdir /var/lib/docker/volumes/cilium-20220601112038-9404: read-only file system
	
	W0601 11:31:19.540735    3760 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:31:19.540735    3760 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:31:19.543730    3760 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/cilium/Start (77.96s)
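[Editor's note] The network-create failure in this run ("networks have overlapping IPv4") means the subnet minikube reserved, 192.168.58.0/24, collided with a bridge network the Docker daemon already had. The overlap test the daemon performs can be illustrated with Python's `ipaddress` module; this is a standalone sketch of the check, not minikube's actual implementation (which lives in its Go `network` package):

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True if two CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# A stale bridge still holding any part of 192.168.58.0/24 would make
# `docker network create --subnet=192.168.58.0/24 ...` fail as above:
print(subnets_overlap("192.168.58.0/24", "192.168.58.128/25"))  # True
print(subnets_overlap("192.168.58.0/24", "192.168.49.0/24"))    # False
```

Listing existing subnets with `docker network inspect` (as the log does for the `bridge` network) and running them through a check like this is one way to find the conflicting leftover network.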

TestStartStop/group/default-k8s-different-port/serial/SecondStart (121.38s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220601112749-9404 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220601112749-9404 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6: exit status 60 (1m57.047366s)

-- stdout --
	* [default-k8s-different-port-20220601112749-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-different-port-20220601112749-9404 in cluster default-k8s-different-port-20220601112749-9404
	* Pulling base image ...
	* docker "default-k8s-different-port-20220601112749-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-different-port-20220601112749-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:30:05.095802    9848 out.go:296] Setting OutFile to fd 1824 ...
	I0601 11:30:05.150347    9848 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:30:05.150347    9848 out.go:309] Setting ErrFile to fd 2016...
	I0601 11:30:05.150347    9848 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:30:05.165351    9848 out.go:303] Setting JSON to false
	I0601 11:30:05.168355    9848 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14940,"bootTime":1654068065,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:30:05.168355    9848 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:30:05.174347    9848 out.go:177] * [default-k8s-different-port-20220601112749-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:30:05.177347    9848 notify.go:193] Checking for updates...
	I0601 11:30:05.179351    9848 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:30:05.181347    9848 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:30:05.184347    9848 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:30:05.186350    9848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:30:05.189347    9848 config.go:178] Loaded profile config "default-k8s-different-port-20220601112749-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:30:05.190351    9848 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:30:07.945187    9848 docker.go:137] docker version: linux-20.10.14
	I0601 11:30:07.953293    9848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:30:10.127757    9848 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1744404s)
	I0601 11:30:10.128420    9848 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:30:09.0350463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:30:10.137389    9848 out.go:177] * Using the docker driver based on existing profile
	I0601 11:30:10.139333    9848 start.go:284] selected driver: docker
	I0601 11:30:10.139333    9848 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601112749-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601112749-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:30:10.139855    9848 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:30:10.204704    9848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:30:12.321836    9848 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1171085s)
	I0601 11:30:12.321836    9848 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-01 11:30:11.2679555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:30:12.321836    9848 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:30:12.322827    9848 cni.go:95] Creating CNI manager for ""
	I0601 11:30:12.322827    9848 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:30:12.322827    9848 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601112749-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601112749-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:30:12.331826    9848 out.go:177] * Starting control plane node default-k8s-different-port-20220601112749-9404 in cluster default-k8s-different-port-20220601112749-9404
	I0601 11:30:12.333833    9848 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:30:12.336824    9848 out.go:177] * Pulling base image ...
	I0601 11:30:12.339850    9848 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:30:12.339850    9848 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:30:12.339850    9848 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:30:12.339850    9848 cache.go:57] Caching tarball of preloaded images
	I0601 11:30:12.339850    9848 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:30:12.340833    9848 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:30:12.340833    9848 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20220601112749-9404\config.json ...
	I0601 11:30:13.450786    9848 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:30:13.450786    9848 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:30:13.450786    9848 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:30:13.451303    9848 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:30:13.451389    9848 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:30:13.451389    9848 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:30:13.451389    9848 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:30:13.451389    9848 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:30:13.451389    9848 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:30:15.789772    9848 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:30:15.789772    9848 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:30:15.789772    9848 start.go:352] acquiring machines lock for default-k8s-different-port-20220601112749-9404: {Name:mk2d253a747261ca3a979b7941df8cd2b45f4516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:30:15.790493    9848 start.go:356] acquired machines lock for "default-k8s-different-port-20220601112749-9404" in 720.9µs
	I0601 11:30:15.790493    9848 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:30:15.790493    9848 fix.go:55] fixHost starting: 
	I0601 11:30:15.808082    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:30:16.920661    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:16.920661    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1125668s)
	I0601 11:30:16.920661    9848 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601112749-9404: state= err=unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:16.920661    9848 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:30:16.923559    9848 out.go:177] * docker "default-k8s-different-port-20220601112749-9404" container is missing, will recreate.
	I0601 11:30:16.928939    9848 delete.go:124] DEMOLISHING default-k8s-different-port-20220601112749-9404 ...
	I0601 11:30:16.941351    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:30:18.084124    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:18.084202    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1425933s)
	W0601 11:30:18.084262    9848 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:18.084326    9848 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:18.098784    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:30:19.193130    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:19.193281    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0941634s)
	I0601 11:30:19.193350    9848 delete.go:82] Unable to get host status for default-k8s-different-port-20220601112749-9404, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:19.202773    9848 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404
	W0601 11:30:20.264289    9848 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:30:20.264289    9848 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404: (1.061462s)
	I0601 11:30:20.264289    9848 kic.go:356] could not find the container default-k8s-different-port-20220601112749-9404 to remove it. will try anyways
	I0601 11:30:20.270173    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:30:21.349428    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:21.349618    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0792143s)
	W0601 11:30:21.349743    9848 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:21.359032    9848 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-different-port-20220601112749-9404 /bin/bash -c "sudo init 0"
	W0601 11:30:22.436025    9848 cli_runner.go:211] docker exec --privileged -t default-k8s-different-port-20220601112749-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:30:22.436025    9848 cli_runner.go:217] Completed: docker exec --privileged -t default-k8s-different-port-20220601112749-9404 /bin/bash -c "sudo init 0": (1.0769813s)
	I0601 11:30:22.436025    9848 oci.go:625] error shutdown default-k8s-different-port-20220601112749-9404: docker exec --privileged -t default-k8s-different-port-20220601112749-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:23.451931    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:30:24.557532    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:24.557532    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1055891s)
	I0601 11:30:24.557532    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:24.557532    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:30:24.557532    9848 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:25.130187    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:30:26.206216    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:26.206216    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0749986s)
	I0601 11:30:26.206216    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:26.206216    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:30:26.206216    9848 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:27.322003    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:30:28.393222    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:28.393222    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0711722s)
	I0601 11:30:28.393377    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:28.393377    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:30:28.393377    9848 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:29.716220    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:30:30.765696    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:30.765696    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0492628s)
	I0601 11:30:30.765873    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:30.765950    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:30:30.765950    9848 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:32.362332    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:30:33.457868    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:33.457905    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0954083s)
	I0601 11:30:33.458006    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:33.458072    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:30:33.458072    9848 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:35.808693    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:30:36.930197    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:36.930197    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1214654s)
	I0601 11:30:36.930197    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:36.930197    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:30:36.930197    9848 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:41.460700    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:30:42.552544    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:42.552621    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0917002s)
	I0601 11:30:42.552836    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:42.552836    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:30:42.552959    9848 oci.go:88] couldn't shut down default-k8s-different-port-20220601112749-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	 
	I0601 11:30:42.561140    9848 cli_runner.go:164] Run: docker rm -f -v default-k8s-different-port-20220601112749-9404
	I0601 11:30:43.604655    9848 cli_runner.go:217] Completed: docker rm -f -v default-k8s-different-port-20220601112749-9404: (1.0434391s)
	I0601 11:30:43.611681    9848 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404
	W0601 11:30:44.688778    9848 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:30:44.688778    9848 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404: (1.0768291s)
	I0601 11:30:44.697786    9848 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:30:45.750746    9848 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:30:45.750799    9848 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0529113s)
	I0601 11:30:45.757606    9848 network_create.go:272] running [docker network inspect default-k8s-different-port-20220601112749-9404] to gather additional debugging logs...
	I0601 11:30:45.757606    9848 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404
	W0601 11:30:46.809576    9848 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:30:46.809831    9848 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404: (1.0519579s)
	I0601 11:30:46.809831    9848 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220601112749-9404]: docker network inspect default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220601112749-9404
	I0601 11:30:46.809882    9848 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220601112749-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220601112749-9404
	
	** /stderr **
	W0601 11:30:46.811428    9848 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:30:46.811428    9848 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:30:47.823387    9848 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:30:47.829229    9848 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:30:47.829341    9848 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220601112749-9404" (driver="docker")
	I0601 11:30:47.829341    9848 client.go:168] LocalClient.Create starting
	I0601 11:30:47.829957    9848 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:30:47.829957    9848 main.go:134] libmachine: Decoding PEM data...
	I0601 11:30:47.829957    9848 main.go:134] libmachine: Parsing certificate...
	I0601 11:30:47.829957    9848 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:30:47.830684    9848 main.go:134] libmachine: Decoding PEM data...
	I0601 11:30:47.830684    9848 main.go:134] libmachine: Parsing certificate...
	I0601 11:30:47.837800    9848 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:30:48.940910    9848 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:30:48.940910    9848 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1030982s)
	I0601 11:30:48.946884    9848 network_create.go:272] running [docker network inspect default-k8s-different-port-20220601112749-9404] to gather additional debugging logs...
	I0601 11:30:48.946884    9848 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404
	W0601 11:30:50.030918    9848 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:30:50.030918    9848 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404: (1.0840217s)
	I0601 11:30:50.030918    9848 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220601112749-9404]: docker network inspect default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220601112749-9404
	I0601 11:30:50.030918    9848 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220601112749-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220601112749-9404
	
	** /stderr **
	I0601 11:30:50.038349    9848 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:30:51.164867    9848 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1262675s)
	I0601 11:30:51.182201    9848 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005944d0] misses:0}
	I0601 11:30:51.183220    9848 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:30:51.183220    9848 network_create.go:115] attempt to create docker network default-k8s-different-port-20220601112749-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:30:51.190513    9848 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404
	W0601 11:30:52.277458    9848 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:30:52.277458    9848 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: (1.0869331s)
	E0601 11:30:52.277458    9848 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220601112749-9404 192.168.49.0/24: create docker network default-k8s-different-port-20220601112749-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b8f73cc5478242d2cff02a9339401a55c26a18a51c86c55435edaca068a0ffba (br-b8f73cc54782): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:30:52.277458    9848 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220601112749-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b8f73cc5478242d2cff02a9339401a55c26a18a51c86c55435edaca068a0ffba (br-b8f73cc54782): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220601112749-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b8f73cc5478242d2cff02a9339401a55c26a18a51c86c55435edaca068a0ffba (br-b8f73cc54782): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:30:52.294985    9848 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:30:53.406085    9848 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1110878s)
	I0601 11:30:53.414461    9848 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:30:54.485995    9848 cli_runner.go:211] docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:30:54.485995    9848 cli_runner.go:217] Completed: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0715229s)
	I0601 11:30:54.485995    9848 client.go:171] LocalClient.Create took 6.6565802s
	I0601 11:30:56.504933    9848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:30:56.511989    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:30:57.599612    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:30:57.599612    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0875819s)
	I0601 11:30:57.599612    9848 retry.go:31] will retry after 164.129813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:57.779316    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:30:58.844255    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:30:58.844317    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0643256s)
	W0601 11:30:58.844317    9848 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:30:58.844317    9848 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:30:58.853727    9848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:30:58.860771    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:30:59.950249    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:30:59.950249    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0894664s)
	I0601 11:30:59.950249    9848 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:00.165882    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:31:01.215875    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:01.215951    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0498207s)
	W0601 11:31:01.216080    9848 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:31:01.216080    9848 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:01.216080    9848 start.go:134] duration metric: createHost completed in 13.3925432s
	I0601 11:31:01.227005    9848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:31:01.232521    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:31:02.340548    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:02.340548    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.1080148s)
	I0601 11:31:02.340548    9848 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:02.683352    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:31:03.801824    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:03.801824    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.1184592s)
	W0601 11:31:03.801824    9848 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:31:03.801824    9848 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:03.811818    9848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:31:03.817874    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:31:04.917103    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:04.917103    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0992165s)
	I0601 11:31:04.917103    9848 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:05.160422    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:31:06.264716    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:06.264905    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.1041252s)
	W0601 11:31:06.265075    9848 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:31:06.265075    9848 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:06.265075    9848 fix.go:57] fixHost completed within 50.4740179s
	I0601 11:31:06.265075    9848 start.go:81] releasing machines lock for "default-k8s-different-port-20220601112749-9404", held for 50.4740179s
	W0601 11:31:06.265075    9848 start.go:599] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	W0601 11:31:06.265757    9848 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	
	I0601 11:31:06.265757    9848 start.go:614] Will try again in 5 seconds ...
	I0601 11:31:11.270051    9848 start.go:352] acquiring machines lock for default-k8s-different-port-20220601112749-9404: {Name:mk2d253a747261ca3a979b7941df8cd2b45f4516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:31:11.270321    9848 start.go:356] acquired machines lock for "default-k8s-different-port-20220601112749-9404" in 224µs
	I0601 11:31:11.270510    9848 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:31:11.270555    9848 fix.go:55] fixHost starting: 
	I0601 11:31:11.292871    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:31:12.436996    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:12.436996    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1436843s)
	I0601 11:31:12.437119    9848 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601112749-9404: state= err=unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:12.437119    9848 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:31:12.440921    9848 out.go:177] * docker "default-k8s-different-port-20220601112749-9404" container is missing, will recreate.
	I0601 11:31:12.443867    9848 delete.go:124] DEMOLISHING default-k8s-different-port-20220601112749-9404 ...
	I0601 11:31:12.464097    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:31:13.578348    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:13.578348    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1142382s)
	W0601 11:31:13.578348    9848 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:13.578348    9848 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:13.591356    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:31:14.773793    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:14.773793    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1822265s)
	I0601 11:31:14.774145    9848 delete.go:82] Unable to get host status for default-k8s-different-port-20220601112749-9404, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:14.783004    9848 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404
	W0601 11:31:15.843800    9848 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:15.844032    9848 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404: (1.0607839s)
	I0601 11:31:15.844091    9848 kic.go:356] could not find the container default-k8s-different-port-20220601112749-9404 to remove it. will try anyways
	I0601 11:31:15.851399    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:31:16.944409    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:16.944409    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0929983s)
	W0601 11:31:16.944409    9848 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:16.950487    9848 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-different-port-20220601112749-9404 /bin/bash -c "sudo init 0"
	W0601 11:31:18.073652    9848 cli_runner.go:211] docker exec --privileged -t default-k8s-different-port-20220601112749-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:31:18.073652    9848 cli_runner.go:217] Completed: docker exec --privileged -t default-k8s-different-port-20220601112749-9404 /bin/bash -c "sudo init 0": (1.123152s)
	I0601 11:31:18.073652    9848 oci.go:625] error shutdown default-k8s-different-port-20220601112749-9404: docker exec --privileged -t default-k8s-different-port-20220601112749-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:19.085587    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:31:20.172712    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:20.172880    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0860872s)
	I0601 11:31:20.172880    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:20.172880    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:31:20.172880    9848 retry.go:31] will retry after 484.444922ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:20.671609    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:31:21.805619    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:21.805619    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1339978s)
	I0601 11:31:21.805619    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:21.805619    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:31:21.805619    9848 retry.go:31] will retry after 587.275613ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:22.410485    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:31:23.521149    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:23.521149    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1096406s)
	I0601 11:31:23.521149    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:23.521149    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:31:23.521149    9848 retry.go:31] will retry after 892.239589ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:24.424095    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:31:25.505170    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:25.505170    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.0808698s)
	I0601 11:31:25.505441    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:25.505441    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:31:25.505441    9848 retry.go:31] will retry after 1.989705391s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:27.509538    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:31:28.626255    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:28.626255    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1167041s)
	I0601 11:31:28.626255    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:28.626255    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:31:28.626255    9848 retry.go:31] will retry after 1.818837414s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:30.463434    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:31:31.573574    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:31.573574    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.1101279s)
	I0601 11:31:31.573574    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:31.573574    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:31:31.573574    9848 retry.go:31] will retry after 2.669912672s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:34.260719    9848 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:31:35.786806    9848 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:35.786806    9848 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (1.5259042s)
	I0601 11:31:35.786806    9848 oci.go:637] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:35.786806    9848 oci.go:639] temporary error: container default-k8s-different-port-20220601112749-9404 status is  but expect it to be exited
	I0601 11:31:35.786806    9848 oci.go:88] couldn't shut down default-k8s-different-port-20220601112749-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	 
	I0601 11:31:35.794355    9848 cli_runner.go:164] Run: docker rm -f -v default-k8s-different-port-20220601112749-9404
	I0601 11:31:38.521841    9848 cli_runner.go:217] Completed: docker rm -f -v default-k8s-different-port-20220601112749-9404: (2.7274563s)
	I0601 11:31:38.529496    9848 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404
	W0601 11:31:39.601428    9848 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:39.601428    9848 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} default-k8s-different-port-20220601112749-9404: (1.0719201s)
	I0601 11:31:39.610428    9848 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:31:40.722299    9848 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:31:40.722299    9848 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1118579s)
	I0601 11:31:40.730302    9848 network_create.go:272] running [docker network inspect default-k8s-different-port-20220601112749-9404] to gather additional debugging logs...
	I0601 11:31:40.731302    9848 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404
	W0601 11:31:41.857244    9848 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:41.857244    9848 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404: (1.1259293s)
	I0601 11:31:41.857244    9848 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220601112749-9404]: docker network inspect default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220601112749-9404
	I0601 11:31:41.857244    9848 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220601112749-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220601112749-9404
	
	** /stderr **
	W0601 11:31:41.858239    9848 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:31:41.858239    9848 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:31:42.863548    9848 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:31:42.867752    9848 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:31:42.867913    9848 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220601112749-9404" (driver="docker")
	I0601 11:31:42.867913    9848 client.go:168] LocalClient.Create starting
	I0601 11:31:42.868693    9848 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:31:42.877287    9848 main.go:134] libmachine: Decoding PEM data...
	I0601 11:31:42.877379    9848 main.go:134] libmachine: Parsing certificate...
	I0601 11:31:42.877543    9848 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:31:42.883516    9848 main.go:134] libmachine: Decoding PEM data...
	I0601 11:31:42.884478    9848 main.go:134] libmachine: Parsing certificate...
	I0601 11:31:42.893113    9848 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:31:44.036745    9848 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:31:44.036805    9848 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.143569s)
	I0601 11:31:44.043393    9848 network_create.go:272] running [docker network inspect default-k8s-different-port-20220601112749-9404] to gather additional debugging logs...
	I0601 11:31:44.043449    9848 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220601112749-9404
	W0601 11:31:45.148918    9848 cli_runner.go:211] docker network inspect default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:45.149109    9848 cli_runner.go:217] Completed: docker network inspect default-k8s-different-port-20220601112749-9404: (1.105387s)
	I0601 11:31:45.149160    9848 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220601112749-9404]: docker network inspect default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220601112749-9404
	I0601 11:31:45.149193    9848 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220601112749-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220601112749-9404
	
	** /stderr **
	I0601 11:31:45.157276    9848 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:31:46.254879    9848 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0975915s)
	I0601 11:31:46.271407    9848 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005944d0] amended:false}} dirty:map[] misses:0}
	I0601 11:31:46.271407    9848 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:31:46.286567    9848 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005944d0] amended:true}} dirty:map[192.168.49.0:0xc0005944d0 192.168.58.0:0xc0001aca20] misses:0}
	I0601 11:31:46.286567    9848 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:31:46.286567    9848 network_create.go:115] attempt to create docker network default-k8s-different-port-20220601112749-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:31:46.294434    9848 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404
	W0601 11:31:47.420266    9848 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:47.420310    9848 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: (1.1257236s)
	E0601 11:31:47.420399    9848 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220601112749-9404 192.168.58.0/24: create docker network default-k8s-different-port-20220601112749-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1e4f9779cdc34d7f77d4df815fedbe7610960ba15f97fcb3d25aa8c6a7ef6bff (br-1e4f9779cdc3): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:31:47.420700    9848 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220601112749-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1e4f9779cdc34d7f77d4df815fedbe7610960ba15f97fcb3d25aa8c6a7ef6bff (br-1e4f9779cdc3): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220601112749-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1e4f9779cdc34d7f77d4df815fedbe7610960ba15f97fcb3d25aa8c6a7ef6bff (br-1e4f9779cdc3): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:31:47.439198    9848 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:31:48.584438    9848 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1450913s)
	I0601 11:31:48.590849    9848 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:31:49.747610    9848 cli_runner.go:211] docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:31:49.747610    9848 cli_runner.go:217] Completed: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: (1.156748s)
	I0601 11:31:49.747610    9848 client.go:171] LocalClient.Create took 6.8796204s
	I0601 11:31:51.762936    9848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:31:51.769123    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:31:52.903631    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:52.903686    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.1343008s)
	I0601 11:31:52.903862    9848 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:53.194013    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:31:54.274814    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:54.274814    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0807897s)
	W0601 11:31:54.274814    9848 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:31:54.274814    9848 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:54.284820    9848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:31:54.290840    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:31:55.382980    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:55.382980    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.0921283s)
	I0601 11:31:55.382980    9848 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:55.591988    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:31:56.711803    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:56.711885    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.1195745s)
	W0601 11:31:56.711885    9848 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:31:56.711885    9848 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:56.711885    9848 start.go:134] duration metric: createHost completed in 13.8481835s
	I0601 11:31:56.725202    9848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:31:56.732056    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:31:57.836094    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:57.836094    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.1034512s)
	I0601 11:31:57.836094    9848 retry.go:31] will retry after 313.143259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:58.156488    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:31:59.276783    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:31:59.276964    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.1199115s)
	W0601 11:31:59.277057    9848 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:31:59.277131    9848 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:31:59.287059    9848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:31:59.293110    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:32:00.402330    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:32:00.402330    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.1092082s)
	I0601 11:32:00.402330    9848 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:32:00.752337    9848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404
	W0601 11:32:01.873045    9848 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404 returned with exit code 1
	I0601 11:32:01.873104    9848 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: (1.1196015s)
	W0601 11:32:01.873511    9848 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:32:01.873570    9848 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220601112749-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601112749-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	I0601 11:32:01.873616    9848 fix.go:57] fixHost completed within 50.6024992s
	I0601 11:32:01.873616    9848 start.go:81] releasing machines lock for "default-k8s-different-port-20220601112749-9404", held for 50.6026591s
	W0601 11:32:01.874328    9848 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220601112749-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220601112749-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	
	I0601 11:32:01.879002    9848 out.go:177] 
	W0601 11:32:01.881054    9848 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220601112749-9404 container: docker volume create default-k8s-different-port-20220601112749-9404 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220601112749-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220601112749-9404: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220601112749-9404: read-only file system
	
	W0601 11:32:01.881583    9848 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:32:01.881769    9848 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:32:01.885892    9848 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220601112749-9404 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220601112749-9404: exit status 1 (1.171346s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (2.995711s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:32:06.237140    8220 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220601112749-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (121.38s)
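Note: the repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` failures above come from minikube rendering a Go `text/template` against the container's inspect data to find the host port mapped to SSH (22/tcp); the command fails here only because the container no longer exists. A minimal sketch of how that template resolves, using a hypothetical stand-in for Docker's inspect structure (the struct, field values, and port 60022 below are illustrative, not taken from this run):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// portBinding is a simplified stand-in for Docker's host-port binding entry.
type portBinding struct {
	HostIP   string
	HostPort string
}

// inspectData mimics the slice of `docker inspect` output the template reads:
// NetworkSettings.Ports maps "port/proto" to its list of host bindings.
type inspectData struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

// renderHostPort executes the same format string minikube passes to
// `docker container inspect -f` against sample data and returns the
// rendered host port for 22/tcp (wrapped in the literal single quotes
// that the format string itself contains).
func renderHostPort() string {
	var data inspectData
	data.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "60022"}},
	}
	const f = `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
	tmpl := template.Must(template.New("port").Parse(f))
	var b strings.Builder
	if err := tmpl.Execute(&b, data); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	fmt.Println(renderHostPort()) // prints '60022'
}
```

When the container has been deleted, `docker container inspect` exits 1 before any template is evaluated, which is why every retry in the log reports `Error: No such container: ...` rather than a template error.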

TestNetworkPlugins/group/calico/Start (80.28s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20220601112038-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20220601112038-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 60 (1m20.1700948s)

-- stdout --
	* [calico-20220601112038-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node calico-20220601112038-9404 in cluster calico-20220601112038-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "calico-20220601112038-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:30:17.537691    7960 out.go:296] Setting OutFile to fd 1816 ...
	I0601 11:30:17.599731    7960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:30:17.599731    7960 out.go:309] Setting ErrFile to fd 1572...
	I0601 11:30:17.599731    7960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:30:17.611739    7960 out.go:303] Setting JSON to false
	I0601 11:30:17.613727    7960 start.go:115] hostinfo: {"hostname":"minikube2","uptime":14953,"bootTime":1654068064,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:30:17.613727    7960 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:30:17.616625    7960 out.go:177] * [calico-20220601112038-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:30:17.620121    7960 notify.go:193] Checking for updates...
	I0601 11:30:17.622998    7960 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:30:17.625746    7960 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:30:17.627955    7960 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:30:17.632011    7960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:30:17.635565    7960 config.go:178] Loaded profile config "cilium-20220601112038-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:30:17.635709    7960 config.go:178] Loaded profile config "default-k8s-different-port-20220601112749-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:30:17.636480    7960 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:30:17.636864    7960 config.go:178] Loaded profile config "newest-cni-20220601112753-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:30:17.637017    7960 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:30:20.358359    7960 docker.go:137] docker version: linux-20.10.14
	I0601 11:30:20.366894    7960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:30:22.467598    7960 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1003523s)
	I0601 11:30:22.468161    7960 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:30:21.4021117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:30:22.471104    7960 out.go:177] * Using the docker driver based on user configuration
	I0601 11:30:22.480524    7960 start.go:284] selected driver: docker
	I0601 11:30:22.480524    7960 start.go:806] validating driver "docker" against <nil>
	I0601 11:30:22.480643    7960 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:30:22.548893    7960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:30:24.620269    7960 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0713522s)
	I0601 11:30:24.620269    7960 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:30:23.5444826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:30:24.620917    7960 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:30:24.621557    7960 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:30:24.626661    7960 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:30:24.629783    7960 cni.go:95] Creating CNI manager for "calico"
	I0601 11:30:24.629783    7960 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0601 11:30:24.629783    7960 start_flags.go:306] config:
	{Name:calico-20220601112038-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220601112038-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:30:24.633449    7960 out.go:177] * Starting control plane node calico-20220601112038-9404 in cluster calico-20220601112038-9404
	I0601 11:30:24.635629    7960 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:30:24.640267    7960 out.go:177] * Pulling base image ...
	I0601 11:30:24.642956    7960 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:30:24.642956    7960 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:30:24.642956    7960 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:30:24.642956    7960 cache.go:57] Caching tarball of preloaded images
	I0601 11:30:24.643651    7960 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:30:24.643651    7960 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:30:24.643651    7960 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-20220601112038-9404\config.json ...
	I0601 11:30:24.643651    7960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-20220601112038-9404\config.json: {Name:mkc79bd6193e13c82fedae48f06587568f41af6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:30:25.701925    7960 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:30:25.701925    7960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:30:25.701925    7960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:30:25.701925    7960 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:30:25.701925    7960 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:30:25.701925    7960 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:30:25.701925    7960 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:30:25.701925    7960 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:30:25.701925    7960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:30:28.096330    7960 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:30:28.096426    7960 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:30:28.096597    7960 start.go:352] acquiring machines lock for calico-20220601112038-9404: {Name:mk7e927e236f76148803c49e53c3477994c68a1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:30:28.096818    7960 start.go:356] acquired machines lock for "calico-20220601112038-9404" in 160.2µs
	I0601 11:30:28.097065    7960 start.go:91] Provisioning new machine with config: &{Name:calico-20220601112038-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220601112038-9404 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:30:28.097156    7960 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:30:28.101114    7960 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:30:28.101187    7960 start.go:165] libmachine.API.Create for "calico-20220601112038-9404" (driver="docker")
	I0601 11:30:28.101187    7960 client.go:168] LocalClient.Create starting
	I0601 11:30:28.101809    7960 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:30:28.101809    7960 main.go:134] libmachine: Decoding PEM data...
	I0601 11:30:28.101809    7960 main.go:134] libmachine: Parsing certificate...
	I0601 11:30:28.102391    7960 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:30:28.102391    7960 main.go:134] libmachine: Decoding PEM data...
	I0601 11:30:28.102391    7960 main.go:134] libmachine: Parsing certificate...
	I0601 11:30:28.112500    7960 cli_runner.go:164] Run: docker network inspect calico-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:30:29.170134    7960 cli_runner.go:211] docker network inspect calico-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:30:29.170134    7960 cli_runner.go:217] Completed: docker network inspect calico-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0576218s)
	I0601 11:30:29.177131    7960 network_create.go:272] running [docker network inspect calico-20220601112038-9404] to gather additional debugging logs...
	I0601 11:30:29.177131    7960 cli_runner.go:164] Run: docker network inspect calico-20220601112038-9404
	W0601 11:30:30.215313    7960 cli_runner.go:211] docker network inspect calico-20220601112038-9404 returned with exit code 1
	I0601 11:30:30.215313    7960 cli_runner.go:217] Completed: docker network inspect calico-20220601112038-9404: (1.0379261s)
	I0601 11:30:30.215313    7960 network_create.go:275] error running [docker network inspect calico-20220601112038-9404]: docker network inspect calico-20220601112038-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220601112038-9404
	I0601 11:30:30.215313    7960 network_create.go:277] output of [docker network inspect calico-20220601112038-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220601112038-9404
	
	** /stderr **
	I0601 11:30:30.222325    7960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:30:31.311278    7960 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.088941s)
	I0601 11:30:31.331808    7960 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00055ca60] misses:0}
	I0601 11:30:31.332685    7960 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:30:31.332916    7960 network_create.go:115] attempt to create docker network calico-20220601112038-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:30:31.341954    7960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601112038-9404
	W0601 11:30:32.353865    7960 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601112038-9404 returned with exit code 1
	I0601 11:30:32.354158    7960 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601112038-9404: (1.0118998s)
	E0601 11:30:32.354241    7960 network_create.go:104] error while trying to create docker network calico-20220601112038-9404 192.168.49.0/24: create docker network calico-20220601112038-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 80599e26ebf5a8856f53758c8fd3803199cc52e71fec87d4ce89a63a390d8bea (br-80599e26ebf5): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:30:32.354527    7960 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220601112038-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 80599e26ebf5a8856f53758c8fd3803199cc52e71fec87d4ce89a63a390d8bea (br-80599e26ebf5): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220601112038-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 80599e26ebf5a8856f53758c8fd3803199cc52e71fec87d4ce89a63a390d8bea (br-80599e26ebf5): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:30:32.369507    7960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:30:33.473133    7960 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1024921s)
	I0601 11:30:33.480724    7960 cli_runner.go:164] Run: docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:30:34.569081    7960 cli_runner.go:211] docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:30:34.569132    7960 cli_runner.go:217] Completed: docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: (1.088187s)
	I0601 11:30:34.569132    7960 client.go:171] LocalClient.Create took 6.4678726s
	I0601 11:30:36.581054    7960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:30:36.587228    7960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404
	W0601 11:30:37.667811    7960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404 returned with exit code 1
	I0601 11:30:37.667870    7960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: (1.080387s)
	I0601 11:30:37.667870    7960 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:37.959515    7960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404
	W0601 11:30:39.041858    7960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404 returned with exit code 1
	I0601 11:30:39.041858    7960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: (1.0823307s)
	W0601 11:30:39.041858    7960 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	
	W0601 11:30:39.041858    7960 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:39.050858    7960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:30:39.053997    7960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404
	W0601 11:30:40.102033    7960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404 returned with exit code 1
	I0601 11:30:40.102089    7960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: (1.0478576s)
	I0601 11:30:40.102243    7960 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:40.414609    7960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404
	W0601 11:30:41.511329    7960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404 returned with exit code 1
	I0601 11:30:41.511329    7960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: (1.0966738s)
	W0601 11:30:41.511329    7960 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	
	W0601 11:30:41.511329    7960 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:41.511329    7960 start.go:134] duration metric: createHost completed in 13.4139165s
	I0601 11:30:41.511329    7960 start.go:81] releasing machines lock for "calico-20220601112038-9404", held for 13.414361s
	W0601 11:30:41.511329    7960 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for calico-20220601112038-9404 container: docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/calico-20220601112038-9404': mkdir /var/lib/docker/volumes/calico-20220601112038-9404: read-only file system
	I0601 11:30:41.525301    7960 cli_runner.go:164] Run: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:42.648511    7960 cli_runner.go:211] docker container inspect calico-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:42.648511    7960 cli_runner.go:217] Completed: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: (1.1231969s)
	I0601 11:30:42.648511    7960 delete.go:82] Unable to get host status for calico-20220601112038-9404, assuming it has already been deleted: state: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	W0601 11:30:42.648511    7960 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for calico-20220601112038-9404 container: docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/calico-20220601112038-9404': mkdir /var/lib/docker/volumes/calico-20220601112038-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for calico-20220601112038-9404 container: docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/calico-20220601112038-9404': mkdir /var/lib/docker/volumes/calico-20220601112038-9404: read-only file system
	
	I0601 11:30:42.648511    7960 start.go:614] Will try again in 5 seconds ...
	I0601 11:30:47.653259    7960 start.go:352] acquiring machines lock for calico-20220601112038-9404: {Name:mk7e927e236f76148803c49e53c3477994c68a1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:30:47.653658    7960 start.go:356] acquired machines lock for "calico-20220601112038-9404" in 331.7µs
	I0601 11:30:47.653658    7960 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:30:47.653658    7960 fix.go:55] fixHost starting: 
	I0601 11:30:47.668875    7960 cli_runner.go:164] Run: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:48.740826    7960 cli_runner.go:211] docker container inspect calico-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:48.740826    7960 cli_runner.go:217] Completed: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: (1.0717303s)
	I0601 11:30:48.740826    7960 fix.go:103] recreateIfNeeded on calico-20220601112038-9404: state= err=unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:48.740826    7960 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:30:48.769123    7960 out.go:177] * docker "calico-20220601112038-9404" container is missing, will recreate.
	I0601 11:30:48.772342    7960 delete.go:124] DEMOLISHING calico-20220601112038-9404 ...
	I0601 11:30:48.785586    7960 cli_runner.go:164] Run: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:49.845177    7960 cli_runner.go:211] docker container inspect calico-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:49.845177    7960 cli_runner.go:217] Completed: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: (1.0595796s)
	W0601 11:30:49.845177    7960 stop.go:75] unable to get state: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:49.845177    7960 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:49.868762    7960 cli_runner.go:164] Run: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:50.976709    7960 cli_runner.go:211] docker container inspect calico-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:50.976849    7960 cli_runner.go:217] Completed: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: (1.1078007s)
	I0601 11:30:50.976962    7960 delete.go:82] Unable to get host status for calico-20220601112038-9404, assuming it has already been deleted: state: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:50.983862    7960 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-20220601112038-9404
	W0601 11:30:52.091647    7960 cli_runner.go:211] docker container inspect -f {{.Id}} calico-20220601112038-9404 returned with exit code 1
	I0601 11:30:52.091647    7960 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} calico-20220601112038-9404: (1.1077725s)
	I0601 11:30:52.091647    7960 kic.go:356] could not find the container calico-20220601112038-9404 to remove it. will try anyways
	I0601 11:30:52.101082    7960 cli_runner.go:164] Run: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:53.204717    7960 cli_runner.go:211] docker container inspect calico-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:53.204961    7960 cli_runner.go:217] Completed: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: (1.1036222s)
	W0601 11:30:53.205036    7960 oci.go:84] error getting container status, will try to delete anyways: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:53.212457    7960 cli_runner.go:164] Run: docker exec --privileged -t calico-20220601112038-9404 /bin/bash -c "sudo init 0"
	W0601 11:30:54.283394    7960 cli_runner.go:211] docker exec --privileged -t calico-20220601112038-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:30:54.283394    7960 cli_runner.go:217] Completed: docker exec --privileged -t calico-20220601112038-9404 /bin/bash -c "sudo init 0": (1.0709252s)
	I0601 11:30:54.283394    7960 oci.go:625] error shutdown calico-20220601112038-9404: docker exec --privileged -t calico-20220601112038-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:55.291766    7960 cli_runner.go:164] Run: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:56.339651    7960 cli_runner.go:211] docker container inspect calico-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:56.339651    7960 cli_runner.go:217] Completed: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: (1.0478731s)
	I0601 11:30:56.339651    7960 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:56.339651    7960 oci.go:639] temporary error: container calico-20220601112038-9404 status is  but expect it to be exited
	I0601 11:30:56.339651    7960 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:56.815698    7960 cli_runner.go:164] Run: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:57.896428    7960 cli_runner.go:211] docker container inspect calico-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:57.896657    7960 cli_runner.go:217] Completed: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: (1.0802018s)
	I0601 11:30:57.896657    7960 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:57.896657    7960 oci.go:639] temporary error: container calico-20220601112038-9404 status is  but expect it to be exited
	I0601 11:30:57.896657    7960 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:58.807037    7960 cli_runner.go:164] Run: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}
	W0601 11:30:59.919195    7960 cli_runner.go:211] docker container inspect calico-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:30:59.919263    7960 cli_runner.go:217] Completed: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: (1.1120558s)
	I0601 11:30:59.919312    7960 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:30:59.919312    7960 oci.go:639] temporary error: container calico-20220601112038-9404 status is  but expect it to be exited
	I0601 11:30:59.919312    7960 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:00.576977    7960 cli_runner.go:164] Run: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}
	W0601 11:31:01.661010    7960 cli_runner.go:211] docker container inspect calico-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:01.661010    7960 cli_runner.go:217] Completed: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: (1.0840211s)
	I0601 11:31:01.661010    7960 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:01.661010    7960 oci.go:639] temporary error: container calico-20220601112038-9404 status is  but expect it to be exited
	I0601 11:31:01.661010    7960 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:02.778928    7960 cli_runner.go:164] Run: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}
	W0601 11:31:03.910083    7960 cli_runner.go:211] docker container inspect calico-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:03.910083    7960 cli_runner.go:217] Completed: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: (1.1311425s)
	I0601 11:31:03.910083    7960 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:03.910083    7960 oci.go:639] temporary error: container calico-20220601112038-9404 status is  but expect it to be exited
	I0601 11:31:03.910083    7960 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:05.439518    7960 cli_runner.go:164] Run: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}
	W0601 11:31:06.558230    7960 cli_runner.go:211] docker container inspect calico-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:06.558230    7960 cli_runner.go:217] Completed: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: (1.1186998s)
	I0601 11:31:06.558230    7960 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:06.558230    7960 oci.go:639] temporary error: container calico-20220601112038-9404 status is  but expect it to be exited
	I0601 11:31:06.558230    7960 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:09.617541    7960 cli_runner.go:164] Run: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}
	W0601 11:31:10.714386    7960 cli_runner.go:211] docker container inspect calico-20220601112038-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:31:10.714386    7960 cli_runner.go:217] Completed: docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: (1.0965604s)
	I0601 11:31:10.714492    7960 oci.go:637] temporary error verifying shutdown: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:10.714580    7960 oci.go:639] temporary error: container calico-20220601112038-9404 status is  but expect it to be exited
	I0601 11:31:10.714676    7960 oci.go:88] couldn't shut down calico-20220601112038-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "calico-20220601112038-9404": docker container inspect calico-20220601112038-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	 
	I0601 11:31:10.721512    7960 cli_runner.go:164] Run: docker rm -f -v calico-20220601112038-9404
	I0601 11:31:11.874575    7960 cli_runner.go:217] Completed: docker rm -f -v calico-20220601112038-9404: (1.1530495s)
	I0601 11:31:11.882965    7960 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-20220601112038-9404
	W0601 11:31:13.046880    7960 cli_runner.go:211] docker container inspect -f {{.Id}} calico-20220601112038-9404 returned with exit code 1
	I0601 11:31:13.046926    7960 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} calico-20220601112038-9404: (1.1638364s)
	I0601 11:31:13.054242    7960 cli_runner.go:164] Run: docker network inspect calico-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:31:14.134441    7960 cli_runner.go:211] docker network inspect calico-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:31:14.134441    7960 cli_runner.go:217] Completed: docker network inspect calico-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0796098s)
	I0601 11:31:14.140430    7960 network_create.go:272] running [docker network inspect calico-20220601112038-9404] to gather additional debugging logs...
	I0601 11:31:14.140430    7960 cli_runner.go:164] Run: docker network inspect calico-20220601112038-9404
	W0601 11:31:15.210655    7960 cli_runner.go:211] docker network inspect calico-20220601112038-9404 returned with exit code 1
	I0601 11:31:15.210702    7960 cli_runner.go:217] Completed: docker network inspect calico-20220601112038-9404: (1.0701229s)
	I0601 11:31:15.210730    7960 network_create.go:275] error running [docker network inspect calico-20220601112038-9404]: docker network inspect calico-20220601112038-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220601112038-9404
	I0601 11:31:15.210859    7960 network_create.go:277] output of [docker network inspect calico-20220601112038-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220601112038-9404
	
	** /stderr **
	W0601 11:31:15.212189    7960 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:31:15.212236    7960 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:31:16.222202    7960 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:31:16.227301    7960 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:31:16.227301    7960 start.go:165] libmachine.API.Create for "calico-20220601112038-9404" (driver="docker")
	I0601 11:31:16.227848    7960 client.go:168] LocalClient.Create starting
	I0601 11:31:16.228011    7960 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:31:16.228011    7960 main.go:134] libmachine: Decoding PEM data...
	I0601 11:31:16.228573    7960 main.go:134] libmachine: Parsing certificate...
	I0601 11:31:16.228823    7960 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:31:16.228823    7960 main.go:134] libmachine: Decoding PEM data...
	I0601 11:31:16.228823    7960 main.go:134] libmachine: Parsing certificate...
	I0601 11:31:16.239084    7960 cli_runner.go:164] Run: docker network inspect calico-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:31:17.352262    7960 cli_runner.go:211] docker network inspect calico-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:31:17.352262    7960 cli_runner.go:217] Completed: docker network inspect calico-20220601112038-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1131656s)
	I0601 11:31:17.360270    7960 network_create.go:272] running [docker network inspect calico-20220601112038-9404] to gather additional debugging logs...
	I0601 11:31:17.360270    7960 cli_runner.go:164] Run: docker network inspect calico-20220601112038-9404
	W0601 11:31:18.448922    7960 cli_runner.go:211] docker network inspect calico-20220601112038-9404 returned with exit code 1
	I0601 11:31:18.448922    7960 cli_runner.go:217] Completed: docker network inspect calico-20220601112038-9404: (1.0884434s)
	I0601 11:31:18.449013    7960 network_create.go:275] error running [docker network inspect calico-20220601112038-9404]: docker network inspect calico-20220601112038-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220601112038-9404
	I0601 11:31:18.449013    7960 network_create.go:277] output of [docker network inspect calico-20220601112038-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220601112038-9404
	
	** /stderr **
	I0601 11:31:18.455195    7960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:31:19.548740    7960 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0935333s)
	I0601 11:31:19.572324    7960 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00055ca60] amended:false}} dirty:map[] misses:0}
	I0601 11:31:19.572425    7960 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:31:19.591986    7960 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00055ca60] amended:true}} dirty:map[192.168.49.0:0xc00055ca60 192.168.58.0:0xc001084140] misses:0}
	I0601 11:31:19.591986    7960 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:31:19.591986    7960 network_create.go:115] attempt to create docker network calico-20220601112038-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:31:19.605084    7960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601112038-9404
	W0601 11:31:20.806706    7960 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601112038-9404 returned with exit code 1
	I0601 11:31:20.806799    7960 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601112038-9404: (1.2015128s)
	E0601 11:31:20.806799    7960 network_create.go:104] error while trying to create docker network calico-20220601112038-9404 192.168.58.0/24: create docker network calico-20220601112038-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b04fba056185ecf98dc6a497fcc8e32fe8a886b75a9b403151d8e71fa7e362e3 (br-b04fba056185): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:31:20.806799    7960 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220601112038-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b04fba056185ecf98dc6a497fcc8e32fe8a886b75a9b403151d8e71fa7e362e3 (br-b04fba056185): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220601112038-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220601112038-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b04fba056185ecf98dc6a497fcc8e32fe8a886b75a9b403151d8e71fa7e362e3 (br-b04fba056185): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:31:20.827722    7960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:31:21.974477    7960 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1467414s)
	I0601 11:31:21.981730    7960 cli_runner.go:164] Run: docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:31:23.159980    7960 cli_runner.go:211] docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:31:23.160043    7960 cli_runner.go:217] Completed: docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1779527s)
	I0601 11:31:23.160043    7960 client.go:171] LocalClient.Create took 6.9321186s
	I0601 11:31:25.169273    7960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:31:25.177506    7960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404
	W0601 11:31:26.292052    7960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404 returned with exit code 1
	I0601 11:31:26.292052    7960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: (1.1145335s)
	I0601 11:31:26.292052    7960 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:26.634063    7960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404
	W0601 11:31:27.752708    7960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404 returned with exit code 1
	I0601 11:31:27.752708    7960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: (1.1186327s)
	W0601 11:31:27.752708    7960 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	
	W0601 11:31:27.752708    7960 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:27.763712    7960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:31:27.770726    7960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404
	W0601 11:31:28.909671    7960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404 returned with exit code 1
	I0601 11:31:28.909755    7960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: (1.1389057s)
	I0601 11:31:28.910058    7960 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:29.139167    7960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404
	W0601 11:31:30.280940    7960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404 returned with exit code 1
	I0601 11:31:30.281026    7960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: (1.1414829s)
	W0601 11:31:30.281026    7960 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	
	W0601 11:31:30.281026    7960 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:30.281026    7960 start.go:134] duration metric: createHost completed in 14.0584596s
	I0601 11:31:30.292088    7960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:31:30.299157    7960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404
	W0601 11:31:31.449443    7960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404 returned with exit code 1
	I0601 11:31:31.449443    7960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: (1.1502727s)
	I0601 11:31:31.449443    7960 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:31.708610    7960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404
	W0601 11:31:32.897348    7960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404 returned with exit code 1
	I0601 11:31:32.897441    7960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: (1.1887239s)
	W0601 11:31:32.897770    7960 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	
	W0601 11:31:32.897810    7960 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:32.909183    7960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:31:32.916182    7960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404
	W0601 11:31:34.498474    7960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404 returned with exit code 1
	I0601 11:31:34.498474    7960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: (1.582123s)
	I0601 11:31:34.498656    7960 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:34.705091    7960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404
	W0601 11:31:37.419007    7960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404 returned with exit code 1
	I0601 11:31:37.419007    7960 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: (2.713886s)
	W0601 11:31:37.419007    7960 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	
	W0601 11:31:37.419007    7960 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220601112038-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220601112038-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220601112038-9404
	I0601 11:31:37.419007    7960 fix.go:57] fixHost completed within 49.7647949s
	I0601 11:31:37.419007    7960 start.go:81] releasing machines lock for "calico-20220601112038-9404", held for 49.7647949s
	W0601 11:31:37.419007    7960 out.go:239] * Failed to start docker container. Running "minikube delete -p calico-20220601112038-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220601112038-9404 container: docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/calico-20220601112038-9404': mkdir /var/lib/docker/volumes/calico-20220601112038-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p calico-20220601112038-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220601112038-9404 container: docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/calico-20220601112038-9404': mkdir /var/lib/docker/volumes/calico-20220601112038-9404: read-only file system
	
	I0601 11:31:37.433041    7960 out.go:177] 
	W0601 11:31:37.435005    7960 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220601112038-9404 container: docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/calico-20220601112038-9404': mkdir /var/lib/docker/volumes/calico-20220601112038-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220601112038-9404 container: docker volume create calico-20220601112038-9404 --label name.minikube.sigs.k8s.io=calico-20220601112038-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220601112038-9404: error while creating volume root path '/var/lib/docker/volumes/calico-20220601112038-9404': mkdir /var/lib/docker/volumes/calico-20220601112038-9404: read-only file system
	
	W0601 11:31:37.435005    7960 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:31:37.435005    7960 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:31:37.438008    7960 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/calico/Start (80.28s)

TestNetworkPlugins/group/false/Start (81.68s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20220601112030-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p false-20220601112030-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: exit status 60 (1m21.5757959s)

-- stdout --
	* [false-20220601112030-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node false-20220601112030-9404 in cluster false-20220601112030-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "false-20220601112030-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:31:32.990253    5000 out.go:296] Setting OutFile to fd 1428 ...
	I0601 11:31:33.067124    5000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:31:33.067124    5000 out.go:309] Setting ErrFile to fd 1940...
	I0601 11:31:33.067216    5000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:31:33.080490    5000 out.go:303] Setting JSON to false
	I0601 11:31:33.082856    5000 start.go:115] hostinfo: {"hostname":"minikube2","uptime":15028,"bootTime":1654068065,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:31:33.082856    5000 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:31:33.089921    5000 out.go:177] * [false-20220601112030-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:31:33.092738    5000 notify.go:193] Checking for updates...
	I0601 11:31:33.095671    5000 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:31:33.098023    5000 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:31:33.100945    5000 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:31:33.104617    5000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:31:33.115870    5000 config.go:178] Loaded profile config "calico-20220601112038-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:31:33.122229    5000 config.go:178] Loaded profile config "default-k8s-different-port-20220601112749-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:31:33.140125    5000 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:31:33.147650    5000 config.go:178] Loaded profile config "newest-cni-20220601112753-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:31:33.147650    5000 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:31:36.529122    5000 docker.go:137] docker version: linux-20.10.14
	I0601 11:31:36.540603    5000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:31:40.864637    5000 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (4.3239857s)
	I0601 11:31:40.865380    5000 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:31:37.6329045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:31:40.869813    5000 out.go:177] * Using the docker driver based on user configuration
	I0601 11:31:40.872274    5000 start.go:284] selected driver: docker
	I0601 11:31:40.872274    5000 start.go:806] validating driver "docker" against <nil>
	I0601 11:31:40.872364    5000 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:31:40.970576    5000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:31:43.147131    5000 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1765312s)
	I0601 11:31:43.147488    5000 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:31:42.0362044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:31:43.147488    5000 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:31:43.148405    5000 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:31:43.151131    5000 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:31:43.153441    5000 cni.go:95] Creating CNI manager for "false"
	I0601 11:31:43.153491    5000 start_flags.go:306] config:
	{Name:false-20220601112030-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:false-20220601112030-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:31:43.156124    5000 out.go:177] * Starting control plane node false-20220601112030-9404 in cluster false-20220601112030-9404
	I0601 11:31:43.161853    5000 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:31:43.164185    5000 out.go:177] * Pulling base image ...
	I0601 11:31:43.166660    5000 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:31:43.166660    5000 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:31:43.166856    5000 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:31:43.166964    5000 cache.go:57] Caching tarball of preloaded images
	I0601 11:31:43.167411    5000 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:31:43.167587    5000 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:31:43.167915    5000 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-20220601112030-9404\config.json ...
	I0601 11:31:43.168314    5000 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-20220601112030-9404\config.json: {Name:mk850c00b0922cbc86d3a7f6fc5585b73a93f0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:31:44.309962    5000 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:31:44.309962    5000 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:31:44.310738    5000 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:31:44.310738    5000 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:31:44.310738    5000 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:31:44.310738    5000 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:31:44.311269    5000 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:31:44.311269    5000 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:31:44.311269    5000 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:31:46.665478    5000 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:31:46.666265    5000 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:31:46.666265    5000 start.go:352] acquiring machines lock for false-20220601112030-9404: {Name:mkf8afe9f26a0b34411c385791f7f2cc43999365 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:31:46.666265    5000 start.go:356] acquired machines lock for "false-20220601112030-9404" in 0s
	I0601 11:31:46.666265    5000 start.go:91] Provisioning new machine with config: &{Name:false-20220601112030-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:false-20220601112030-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:31:46.666799    5000 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:31:46.671627    5000 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:31:46.671627    5000 start.go:165] libmachine.API.Create for "false-20220601112030-9404" (driver="docker")
	I0601 11:31:46.671627    5000 client.go:168] LocalClient.Create starting
	I0601 11:31:46.672404    5000 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:31:46.672404    5000 main.go:134] libmachine: Decoding PEM data...
	I0601 11:31:46.672404    5000 main.go:134] libmachine: Parsing certificate...
	I0601 11:31:46.672995    5000 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:31:46.673142    5000 main.go:134] libmachine: Decoding PEM data...
	I0601 11:31:46.673142    5000 main.go:134] libmachine: Parsing certificate...
	I0601 11:31:46.682087    5000 cli_runner.go:164] Run: docker network inspect false-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:31:47.822541    5000 cli_runner.go:211] docker network inspect false-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:31:47.822541    5000 cli_runner.go:217] Completed: docker network inspect false-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1401802s)
	I0601 11:31:47.829881    5000 network_create.go:272] running [docker network inspect false-20220601112030-9404] to gather additional debugging logs...
	I0601 11:31:47.829881    5000 cli_runner.go:164] Run: docker network inspect false-20220601112030-9404
	W0601 11:31:48.928047    5000 cli_runner.go:211] docker network inspect false-20220601112030-9404 returned with exit code 1
	I0601 11:31:48.928047    5000 cli_runner.go:217] Completed: docker network inspect false-20220601112030-9404: (1.0981535s)
	I0601 11:31:48.928047    5000 network_create.go:275] error running [docker network inspect false-20220601112030-9404]: docker network inspect false-20220601112030-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220601112030-9404
	I0601 11:31:48.928047    5000 network_create.go:277] output of [docker network inspect false-20220601112030-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220601112030-9404
	
	** /stderr **
	I0601 11:31:48.934071    5000 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:31:50.025578    5000 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0914941s)
	I0601 11:31:50.045921    5000 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000478100] misses:0}
	I0601 11:31:50.046441    5000 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:31:50.046441    5000 network_create.go:115] attempt to create docker network false-20220601112030-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:31:50.054032    5000 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601112030-9404
	W0601 11:31:51.139103    5000 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601112030-9404 returned with exit code 1
	I0601 11:31:51.139103    5000 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601112030-9404: (1.0848121s)
	E0601 11:31:51.139103    5000 network_create.go:104] error while trying to create docker network false-20220601112030-9404 192.168.49.0/24: create docker network false-20220601112030-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601112030-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2b14052b4c9654aa306051a97711d76e7d03cf57b16cd1a680cdd5b62af2e2f2 (br-2b14052b4c96): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:31:51.139103    5000 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220601112030-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601112030-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2b14052b4c9654aa306051a97711d76e7d03cf57b16cd1a680cdd5b62af2e2f2 (br-2b14052b4c96): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220601112030-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601112030-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2b14052b4c9654aa306051a97711d76e7d03cf57b16cd1a680cdd5b62af2e2f2 (br-2b14052b4c96): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:31:51.152235    5000 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:31:52.280544    5000 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1282967s)
	I0601 11:31:52.286546    5000 cli_runner.go:164] Run: docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:31:53.418699    5000 cli_runner.go:211] docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:31:53.418699    5000 cli_runner.go:217] Completed: docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1321408s)
	I0601 11:31:53.418699    5000 client.go:171] LocalClient.Create took 6.7469977s
	I0601 11:31:55.443028    5000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:31:55.451879    5000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404
	W0601 11:31:56.564912    5000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404 returned with exit code 1
	I0601 11:31:56.564912    5000 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: (1.1130211s)
	I0601 11:31:56.565261    5000 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:31:56.859755    5000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404
	W0601 11:31:57.976842    5000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404 returned with exit code 1
	I0601 11:31:57.976842    5000 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: (1.1170748s)
	W0601 11:31:57.976842    5000 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	
	W0601 11:31:57.976842    5000 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:31:57.985839    5000 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:31:57.991847    5000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404
	W0601 11:31:59.105010    5000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404 returned with exit code 1
	I0601 11:31:59.105010    5000 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: (1.1131501s)
	I0601 11:31:59.105010    5000 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:31:59.408359    5000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404
	W0601 11:32:00.558693    5000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404 returned with exit code 1
	I0601 11:32:00.558751    5000 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: (1.1502505s)
	W0601 11:32:00.558751    5000 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	
	W0601 11:32:00.558751    5000 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:00.558751    5000 start.go:134] duration metric: createHost completed in 13.8916673s
	I0601 11:32:00.558751    5000 start.go:81] releasing machines lock for "false-20220601112030-9404", held for 13.8923319s
	W0601 11:32:00.558751    5000 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for false-20220601112030-9404 container: docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/false-20220601112030-9404': mkdir /var/lib/docker/volumes/false-20220601112030-9404: read-only file system
	I0601 11:32:00.579770    5000 cli_runner.go:164] Run: docker container inspect false-20220601112030-9404 --format={{.State.Status}}
	W0601 11:32:01.718455    5000 cli_runner.go:211] docker container inspect false-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:01.718667    5000 cli_runner.go:217] Completed: docker container inspect false-20220601112030-9404 --format={{.State.Status}}: (1.138673s)
	I0601 11:32:01.718718    5000 delete.go:82] Unable to get host status for false-20220601112030-9404, assuming it has already been deleted: state: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	W0601 11:32:01.719197    5000 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for false-20220601112030-9404 container: docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/false-20220601112030-9404': mkdir /var/lib/docker/volumes/false-20220601112030-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for false-20220601112030-9404 container: docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/false-20220601112030-9404': mkdir /var/lib/docker/volumes/false-20220601112030-9404: read-only file system
	
	I0601 11:32:01.719197    5000 start.go:614] Will try again in 5 seconds ...
	I0601 11:32:06.722110    5000 start.go:352] acquiring machines lock for false-20220601112030-9404: {Name:mkf8afe9f26a0b34411c385791f7f2cc43999365 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:32:06.722110    5000 start.go:356] acquired machines lock for "false-20220601112030-9404" in 0s
	I0601 11:32:06.722110    5000 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:32:06.722110    5000 fix.go:55] fixHost starting: 
	I0601 11:32:06.737574    5000 cli_runner.go:164] Run: docker container inspect false-20220601112030-9404 --format={{.State.Status}}
	W0601 11:32:07.854617    5000 cli_runner.go:211] docker container inspect false-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:07.854678    5000 cli_runner.go:217] Completed: docker container inspect false-20220601112030-9404 --format={{.State.Status}}: (1.1169661s)
	I0601 11:32:07.854735    5000 fix.go:103] recreateIfNeeded on false-20220601112030-9404: state= err=unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:07.854858    5000 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:32:07.862409    5000 out.go:177] * docker "false-20220601112030-9404" container is missing, will recreate.
	I0601 11:32:07.864289    5000 delete.go:124] DEMOLISHING false-20220601112030-9404 ...
	I0601 11:32:07.877091    5000 cli_runner.go:164] Run: docker container inspect false-20220601112030-9404 --format={{.State.Status}}
	W0601 11:32:09.004693    5000 cli_runner.go:211] docker container inspect false-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:09.004693    5000 cli_runner.go:217] Completed: docker container inspect false-20220601112030-9404 --format={{.State.Status}}: (1.1275902s)
	W0601 11:32:09.004693    5000 stop.go:75] unable to get state: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:09.004693    5000 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:09.018717    5000 cli_runner.go:164] Run: docker container inspect false-20220601112030-9404 --format={{.State.Status}}
	W0601 11:32:10.119944    5000 cli_runner.go:211] docker container inspect false-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:10.119944    5000 cli_runner.go:217] Completed: docker container inspect false-20220601112030-9404 --format={{.State.Status}}: (1.1012146s)
	I0601 11:32:10.119944    5000 delete.go:82] Unable to get host status for false-20220601112030-9404, assuming it has already been deleted: state: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:10.128470    5000 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-20220601112030-9404
	W0601 11:32:11.230229    5000 cli_runner.go:211] docker container inspect -f {{.Id}} false-20220601112030-9404 returned with exit code 1
	I0601 11:32:11.230229    5000 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} false-20220601112030-9404: (1.1015764s)
	I0601 11:32:11.230337    5000 kic.go:356] could not find the container false-20220601112030-9404 to remove it. will try anyways
	I0601 11:32:11.240068    5000 cli_runner.go:164] Run: docker container inspect false-20220601112030-9404 --format={{.State.Status}}
	W0601 11:32:12.334320    5000 cli_runner.go:211] docker container inspect false-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:12.334320    5000 cli_runner.go:217] Completed: docker container inspect false-20220601112030-9404 --format={{.State.Status}}: (1.0942397s)
	W0601 11:32:12.334320    5000 oci.go:84] error getting container status, will try to delete anyways: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:12.346330    5000 cli_runner.go:164] Run: docker exec --privileged -t false-20220601112030-9404 /bin/bash -c "sudo init 0"
	W0601 11:32:13.435194    5000 cli_runner.go:211] docker exec --privileged -t false-20220601112030-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:32:13.435194    5000 cli_runner.go:217] Completed: docker exec --privileged -t false-20220601112030-9404 /bin/bash -c "sudo init 0": (1.0886819s)
	I0601 11:32:13.435365    5000 oci.go:625] error shutdown false-20220601112030-9404: docker exec --privileged -t false-20220601112030-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:14.451486    5000 cli_runner.go:164] Run: docker container inspect false-20220601112030-9404 --format={{.State.Status}}
	W0601 11:32:15.563694    5000 cli_runner.go:211] docker container inspect false-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:15.563751    5000 cli_runner.go:217] Completed: docker container inspect false-20220601112030-9404 --format={{.State.Status}}: (1.111997s)
	I0601 11:32:15.563751    5000 oci.go:637] temporary error verifying shutdown: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:15.563751    5000 oci.go:639] temporary error: container false-20220601112030-9404 status is  but expect it to be exited
	I0601 11:32:15.563751    5000 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:16.042969    5000 cli_runner.go:164] Run: docker container inspect false-20220601112030-9404 --format={{.State.Status}}
	W0601 11:32:17.138952    5000 cli_runner.go:211] docker container inspect false-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:17.139228    5000 cli_runner.go:217] Completed: docker container inspect false-20220601112030-9404 --format={{.State.Status}}: (1.0959707s)
	I0601 11:32:17.139327    5000 oci.go:637] temporary error verifying shutdown: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:17.139327    5000 oci.go:639] temporary error: container false-20220601112030-9404 status is  but expect it to be exited
	I0601 11:32:17.139327    5000 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:18.042140    5000 cli_runner.go:164] Run: docker container inspect false-20220601112030-9404 --format={{.State.Status}}
	W0601 11:32:19.139017    5000 cli_runner.go:211] docker container inspect false-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:19.139806    5000 cli_runner.go:217] Completed: docker container inspect false-20220601112030-9404 --format={{.State.Status}}: (1.0967459s)
	I0601 11:32:19.139806    5000 oci.go:637] temporary error verifying shutdown: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:19.139917    5000 oci.go:639] temporary error: container false-20220601112030-9404 status is  but expect it to be exited
	I0601 11:32:19.139917    5000 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:19.799697    5000 cli_runner.go:164] Run: docker container inspect false-20220601112030-9404 --format={{.State.Status}}
	W0601 11:32:20.898541    5000 cli_runner.go:211] docker container inspect false-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:20.898657    5000 cli_runner.go:217] Completed: docker container inspect false-20220601112030-9404 --format={{.State.Status}}: (1.0986913s)
	I0601 11:32:20.898831    5000 oci.go:637] temporary error verifying shutdown: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:20.898903    5000 oci.go:639] temporary error: container false-20220601112030-9404 status is  but expect it to be exited
	I0601 11:32:20.898929    5000 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:22.021215    5000 cli_runner.go:164] Run: docker container inspect false-20220601112030-9404 --format={{.State.Status}}
	W0601 11:32:23.108440    5000 cli_runner.go:211] docker container inspect false-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:23.108440    5000 cli_runner.go:217] Completed: docker container inspect false-20220601112030-9404 --format={{.State.Status}}: (1.0872129s)
	I0601 11:32:23.108440    5000 oci.go:637] temporary error verifying shutdown: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:23.108440    5000 oci.go:639] temporary error: container false-20220601112030-9404 status is  but expect it to be exited
	I0601 11:32:23.108440    5000 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:24.636494    5000 cli_runner.go:164] Run: docker container inspect false-20220601112030-9404 --format={{.State.Status}}
	W0601 11:32:25.786871    5000 cli_runner.go:211] docker container inspect false-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:25.786929    5000 cli_runner.go:217] Completed: docker container inspect false-20220601112030-9404 --format={{.State.Status}}: (1.1501893s)
	I0601 11:32:25.786963    5000 oci.go:637] temporary error verifying shutdown: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:25.787026    5000 oci.go:639] temporary error: container false-20220601112030-9404 status is  but expect it to be exited
	I0601 11:32:25.787099    5000 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:28.838575    5000 cli_runner.go:164] Run: docker container inspect false-20220601112030-9404 --format={{.State.Status}}
	W0601 11:32:29.943866    5000 cli_runner.go:211] docker container inspect false-20220601112030-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:29.943921    5000 cli_runner.go:217] Completed: docker container inspect false-20220601112030-9404 --format={{.State.Status}}: (1.1050736s)
	I0601 11:32:29.944002    5000 oci.go:637] temporary error verifying shutdown: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:29.944002    5000 oci.go:639] temporary error: container false-20220601112030-9404 status is  but expect it to be exited
	I0601 11:32:29.944002    5000 oci.go:88] couldn't shut down false-20220601112030-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "false-20220601112030-9404": docker container inspect false-20220601112030-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	 
	I0601 11:32:29.951400    5000 cli_runner.go:164] Run: docker rm -f -v false-20220601112030-9404
	I0601 11:32:31.079097    5000 cli_runner.go:217] Completed: docker rm -f -v false-20220601112030-9404: (1.1275053s)
	I0601 11:32:31.087338    5000 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-20220601112030-9404
	W0601 11:32:32.213612    5000 cli_runner.go:211] docker container inspect -f {{.Id}} false-20220601112030-9404 returned with exit code 1
	I0601 11:32:32.213612    5000 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} false-20220601112030-9404: (1.1262615s)
	I0601 11:32:32.222122    5000 cli_runner.go:164] Run: docker network inspect false-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:32:33.350621    5000 cli_runner.go:211] docker network inspect false-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:32:33.350718    5000 cli_runner.go:217] Completed: docker network inspect false-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1282796s)
	I0601 11:32:33.359337    5000 network_create.go:272] running [docker network inspect false-20220601112030-9404] to gather additional debugging logs...
	I0601 11:32:33.359409    5000 cli_runner.go:164] Run: docker network inspect false-20220601112030-9404
	W0601 11:32:34.489604    5000 cli_runner.go:211] docker network inspect false-20220601112030-9404 returned with exit code 1
	I0601 11:32:34.489604    5000 cli_runner.go:217] Completed: docker network inspect false-20220601112030-9404: (1.130183s)
	I0601 11:32:34.489604    5000 network_create.go:275] error running [docker network inspect false-20220601112030-9404]: docker network inspect false-20220601112030-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220601112030-9404
	I0601 11:32:34.489604    5000 network_create.go:277] output of [docker network inspect false-20220601112030-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220601112030-9404
	
	** /stderr **
	W0601 11:32:34.490601    5000 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:32:34.490601    5000 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:32:35.501866    5000 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:32:35.506768    5000 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:32:35.507015    5000 start.go:165] libmachine.API.Create for "false-20220601112030-9404" (driver="docker")
	I0601 11:32:35.507112    5000 client.go:168] LocalClient.Create starting
	I0601 11:32:35.507625    5000 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:32:35.507625    5000 main.go:134] libmachine: Decoding PEM data...
	I0601 11:32:35.507625    5000 main.go:134] libmachine: Parsing certificate...
	I0601 11:32:35.507625    5000 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:32:35.508273    5000 main.go:134] libmachine: Decoding PEM data...
	I0601 11:32:35.508348    5000 main.go:134] libmachine: Parsing certificate...
	I0601 11:32:35.517132    5000 cli_runner.go:164] Run: docker network inspect false-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:32:36.600804    5000 cli_runner.go:211] docker network inspect false-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:32:36.600804    5000 cli_runner.go:217] Completed: docker network inspect false-20220601112030-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0836598s)
	I0601 11:32:36.608866    5000 network_create.go:272] running [docker network inspect false-20220601112030-9404] to gather additional debugging logs...
	I0601 11:32:36.608866    5000 cli_runner.go:164] Run: docker network inspect false-20220601112030-9404
	W0601 11:32:37.733144    5000 cli_runner.go:211] docker network inspect false-20220601112030-9404 returned with exit code 1
	I0601 11:32:37.733289    5000 cli_runner.go:217] Completed: docker network inspect false-20220601112030-9404: (1.1242184s)
	I0601 11:32:37.733337    5000 network_create.go:275] error running [docker network inspect false-20220601112030-9404]: docker network inspect false-20220601112030-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220601112030-9404
	I0601 11:32:37.733337    5000 network_create.go:277] output of [docker network inspect false-20220601112030-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220601112030-9404
	
	** /stderr **
	I0601 11:32:37.740414    5000 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:32:38.872754    5000 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1315855s)
	I0601 11:32:38.888136    5000 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000478100] amended:false}} dirty:map[] misses:0}
	I0601 11:32:38.888136    5000 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:32:38.904257    5000 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000478100] amended:true}} dirty:map[192.168.49.0:0xc000478100 192.168.58.0:0xc0001384f8] misses:0}
	I0601 11:32:38.904257    5000 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:32:38.904257    5000 network_create.go:115] attempt to create docker network false-20220601112030-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:32:38.912337    5000 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601112030-9404
	W0601 11:32:40.003523    5000 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601112030-9404 returned with exit code 1
	I0601 11:32:40.003523    5000 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601112030-9404: (1.0911744s)
	E0601 11:32:40.003523    5000 network_create.go:104] error while trying to create docker network false-20220601112030-9404 192.168.58.0/24: create docker network false-20220601112030-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601112030-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0e8a91bfec6175e5cb8326cad082905e593e6d4cd9e87aab17774a0d134de728 (br-0e8a91bfec61): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:32:40.003523    5000 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220601112030-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601112030-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0e8a91bfec6175e5cb8326cad082905e593e6d4cd9e87aab17774a0d134de728 (br-0e8a91bfec61): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220601112030-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220601112030-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0e8a91bfec6175e5cb8326cad082905e593e6d4cd9e87aab17774a0d134de728 (br-0e8a91bfec61): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:32:40.018320    5000 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:32:41.151039    5000 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.132502s)
	I0601 11:32:41.158957    5000 cli_runner.go:164] Run: docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:32:42.260305    5000 cli_runner.go:211] docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:32:42.260305    5000 cli_runner.go:217] Completed: docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1013349s)
	I0601 11:32:42.260305    5000 client.go:171] LocalClient.Create took 6.7531167s
	I0601 11:32:44.283220    5000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:32:44.288820    5000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404
	W0601 11:32:45.347916    5000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404 returned with exit code 1
	I0601 11:32:45.347916    5000 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: (1.0590251s)
	I0601 11:32:45.347916    5000 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:45.686358    5000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404
	W0601 11:32:46.789099    5000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404 returned with exit code 1
	I0601 11:32:46.789099    5000 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: (1.102729s)
	W0601 11:32:46.789099    5000 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	
	W0601 11:32:46.789099    5000 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:46.799074    5000 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:32:46.805104    5000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404
	W0601 11:32:47.951512    5000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404 returned with exit code 1
	I0601 11:32:47.951512    5000 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: (1.1463954s)
	I0601 11:32:47.951512    5000 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:48.182727    5000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404
	W0601 11:32:49.303495    5000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404 returned with exit code 1
	I0601 11:32:49.303495    5000 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: (1.1207558s)
	W0601 11:32:49.303495    5000 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	
	W0601 11:32:49.303495    5000 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:49.303495    5000 start.go:134] duration metric: createHost completed in 13.8012505s
	I0601 11:32:49.315896    5000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:32:49.321366    5000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404
	W0601 11:32:50.457562    5000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404 returned with exit code 1
	I0601 11:32:50.457562    5000 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: (1.1361825s)
	I0601 11:32:50.457562    5000 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:50.709742    5000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404
	W0601 11:32:51.826318    5000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404 returned with exit code 1
	I0601 11:32:51.826318    5000 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: (1.116563s)
	W0601 11:32:51.826318    5000 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	
	W0601 11:32:51.826318    5000 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:51.837523    5000 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:32:51.843555    5000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404
	W0601 11:32:52.931094    5000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404 returned with exit code 1
	I0601 11:32:52.931094    5000 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: (1.0875262s)
	I0601 11:32:52.931094    5000 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:53.144777    5000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404
	W0601 11:32:54.253664    5000 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404 returned with exit code 1
	I0601 11:32:54.253664    5000 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: (1.1087148s)
	W0601 11:32:54.253899    5000 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	
	W0601 11:32:54.254027    5000 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220601112030-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220601112030-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220601112030-9404
	I0601 11:32:54.254027    5000 fix.go:57] fixHost completed within 47.5313866s
	I0601 11:32:54.254080    5000 start.go:81] releasing machines lock for "false-20220601112030-9404", held for 47.5314395s
	W0601 11:32:54.254517    5000 out.go:239] * Failed to start docker container. Running "minikube delete -p false-20220601112030-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for false-20220601112030-9404 container: docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/false-20220601112030-9404': mkdir /var/lib/docker/volumes/false-20220601112030-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p false-20220601112030-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for false-20220601112030-9404 container: docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/false-20220601112030-9404': mkdir /var/lib/docker/volumes/false-20220601112030-9404: read-only file system
	
	I0601 11:32:54.260072    5000 out.go:177] 
	W0601 11:32:54.261894    5000 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for false-20220601112030-9404 container: docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/false-20220601112030-9404': mkdir /var/lib/docker/volumes/false-20220601112030-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for false-20220601112030-9404 container: docker volume create false-20220601112030-9404 --label name.minikube.sigs.k8s.io=false-20220601112030-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220601112030-9404: error while creating volume root path '/var/lib/docker/volumes/false-20220601112030-9404': mkdir /var/lib/docker/volumes/false-20220601112030-9404: read-only file system
	
	W0601 11:32:54.262656    5000 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:32:54.262699    5000 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:32:54.266464    5000 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/false/Start (81.68s)

TestNetworkPlugins/group/bridge/Start (79.18s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20220601112023-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p bridge-20220601112023-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: exit status 60 (1m19.0787125s)

-- stdout --
	* [bridge-20220601112023-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node bridge-20220601112023-9404 in cluster bridge-20220601112023-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "bridge-20220601112023-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:31:52.418368    9536 out.go:296] Setting OutFile to fd 1784 ...
	I0601 11:31:52.492324    9536 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:31:52.492324    9536 out.go:309] Setting ErrFile to fd 1816...
	I0601 11:31:52.492873    9536 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:31:52.504789    9536 out.go:303] Setting JSON to false
	I0601 11:31:52.508960    9536 start.go:115] hostinfo: {"hostname":"minikube2","uptime":15048,"bootTime":1654068064,"procs":160,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:31:52.509288    9536 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:31:52.514245    9536 out.go:177] * [bridge-20220601112023-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:31:52.520364    9536 notify.go:193] Checking for updates...
	I0601 11:31:52.524989    9536 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:31:52.537617    9536 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:31:52.540259    9536 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:31:52.545473    9536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:31:52.549558    9536 config.go:178] Loaded profile config "default-k8s-different-port-20220601112749-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:31:52.550254    9536 config.go:178] Loaded profile config "false-20220601112030-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:31:52.550866    9536 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:31:52.550866    9536 config.go:178] Loaded profile config "newest-cni-20220601112753-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:31:52.550866    9536 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:31:55.259916    9536 docker.go:137] docker version: linux-20.10.14
	I0601 11:31:55.268633    9536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:31:57.492917    9536 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2240616s)
	I0601 11:31:57.493899    9536 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:31:56.3484464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:31:57.499382    9536 out.go:177] * Using the docker driver based on user configuration
	I0601 11:31:57.502087    9536 start.go:284] selected driver: docker
	I0601 11:31:57.502087    9536 start.go:806] validating driver "docker" against <nil>
	I0601 11:31:57.502087    9536 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:31:57.589058    9536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:31:59.851732    9536 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2626489s)
	I0601 11:31:59.851732    9536 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:31:58.7395013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:31:59.852413    9536 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:31:59.853068    9536 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:31:59.856930    9536 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:31:59.859033    9536 cni.go:95] Creating CNI manager for "bridge"
	I0601 11:31:59.859033    9536 start_flags.go:301] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0601 11:31:59.859033    9536 start_flags.go:306] config:
	{Name:bridge-20220601112023-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:bridge-20220601112023-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:31:59.861799    9536 out.go:177] * Starting control plane node bridge-20220601112023-9404 in cluster bridge-20220601112023-9404
	I0601 11:31:59.869826    9536 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:31:59.872013    9536 out.go:177] * Pulling base image ...
	I0601 11:31:59.875263    9536 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:31:59.876250    9536 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:31:59.876250    9536 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:31:59.876371    9536 cache.go:57] Caching tarball of preloaded images
	I0601 11:31:59.876538    9536 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:31:59.876538    9536 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:31:59.877057    9536 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\bridge-20220601112023-9404\config.json ...
	I0601 11:31:59.877057    9536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\bridge-20220601112023-9404\config.json: {Name:mkf45c4bf34f4d6f74a9864516f342415a37f53d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:32:01.025773    9536 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:32:01.025773    9536 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:32:01.025773    9536 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:32:01.025773    9536 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:32:01.025773    9536 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:32:01.025773    9536 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:32:01.025773    9536 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:32:01.025773    9536 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:32:01.025773    9536 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:32:03.473362    9536 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:32:03.473423    9536 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:32:03.473423    9536 start.go:352] acquiring machines lock for bridge-20220601112023-9404: {Name:mkdf4688f7b88074eff53791f9744ee7142a8c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:32:03.473758    9536 start.go:356] acquired machines lock for "bridge-20220601112023-9404" in 198.2µs
	I0601 11:32:03.473956    9536 start.go:91] Provisioning new machine with config: &{Name:bridge-20220601112023-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:bridge-20220601112023-9404 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:32:03.474127    9536 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:32:03.476609    9536 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:32:03.476609    9536 start.go:165] libmachine.API.Create for "bridge-20220601112023-9404" (driver="docker")
	I0601 11:32:03.476609    9536 client.go:168] LocalClient.Create starting
	I0601 11:32:03.476609    9536 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:32:03.476609    9536 main.go:134] libmachine: Decoding PEM data...
	I0601 11:32:03.476609    9536 main.go:134] libmachine: Parsing certificate...
	I0601 11:32:03.476609    9536 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:32:03.476609    9536 main.go:134] libmachine: Decoding PEM data...
	I0601 11:32:03.476609    9536 main.go:134] libmachine: Parsing certificate...
	I0601 11:32:03.482415    9536 cli_runner.go:164] Run: docker network inspect bridge-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:32:04.553518    9536 cli_runner.go:211] docker network inspect bridge-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:32:04.553518    9536 cli_runner.go:217] Completed: docker network inspect bridge-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0710905s)
	I0601 11:32:04.560898    9536 network_create.go:272] running [docker network inspect bridge-20220601112023-9404] to gather additional debugging logs...
	I0601 11:32:04.561426    9536 cli_runner.go:164] Run: docker network inspect bridge-20220601112023-9404
	W0601 11:32:05.640563    9536 cli_runner.go:211] docker network inspect bridge-20220601112023-9404 returned with exit code 1
	I0601 11:32:05.640608    9536 cli_runner.go:217] Completed: docker network inspect bridge-20220601112023-9404: (1.0790603s)
	I0601 11:32:05.640608    9536 network_create.go:275] error running [docker network inspect bridge-20220601112023-9404]: docker network inspect bridge-20220601112023-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220601112023-9404
	I0601 11:32:05.640608    9536 network_create.go:277] output of [docker network inspect bridge-20220601112023-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220601112023-9404
	
	** /stderr **
	I0601 11:32:05.648258    9536 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:32:06.799154    9536 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1507566s)
	I0601 11:32:06.820307    9536 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003be2b0] misses:0}
	I0601 11:32:06.820307    9536 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:32:06.820307    9536 network_create.go:115] attempt to create docker network bridge-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:32:06.829271    9536 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220601112023-9404
	W0601 11:32:07.992825    9536 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220601112023-9404 returned with exit code 1
	I0601 11:32:07.992825    9536 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220601112023-9404: (1.1633082s)
	E0601 11:32:07.993111    9536 network_create.go:104] error while trying to create docker network bridge-20220601112023-9404 192.168.49.0/24: create docker network bridge-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f0278b28f051c99442c4010e2ddf5c67ea346cd2d7ecc22592c251bf23b988e2 (br-f0278b28f051): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:32:07.993332    9536 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f0278b28f051c99442c4010e2ddf5c67ea346cd2d7ecc22592c251bf23b988e2 (br-f0278b28f051): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f0278b28f051c99442c4010e2ddf5c67ea346cd2d7ecc22592c251bf23b988e2 (br-f0278b28f051): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:32:08.011181    9536 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:32:09.160124    9536 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1489304s)
	I0601 11:32:09.166104    9536 cli_runner.go:164] Run: docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:32:10.259833    9536 cli_runner.go:211] docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:32:10.259833    9536 cli_runner.go:217] Completed: docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0934796s)
	I0601 11:32:10.259910    9536 client.go:171] LocalClient.Create took 6.7832256s
	I0601 11:32:12.285859    9536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:32:12.293084    9536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404
	W0601 11:32:13.404810    9536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404 returned with exit code 1
	I0601 11:32:13.404810    9536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: (1.1116646s)
	I0601 11:32:13.404810    9536 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:13.693540    9536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404
	W0601 11:32:14.817992    9536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404 returned with exit code 1
	I0601 11:32:14.818300    9536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: (1.1243597s)
	W0601 11:32:14.818516    9536 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	
	W0601 11:32:14.818555    9536 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:14.832608    9536 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:32:14.840757    9536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404
	W0601 11:32:15.987815    9536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404 returned with exit code 1
	I0601 11:32:15.987923    9536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: (1.1468452s)
	I0601 11:32:15.988102    9536 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:16.296791    9536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404
	W0601 11:32:17.404700    9536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404 returned with exit code 1
	I0601 11:32:17.404700    9536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: (1.1076864s)
	W0601 11:32:17.404985    9536 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	
	W0601 11:32:17.405060    9536 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:17.405060    9536 start.go:134] duration metric: createHost completed in 13.9307785s
	I0601 11:32:17.405060    9536 start.go:81] releasing machines lock for "bridge-20220601112023-9404", held for 13.9310839s
	W0601 11:32:17.405060    9536 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for bridge-20220601112023-9404 container: docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/bridge-20220601112023-9404': mkdir /var/lib/docker/volumes/bridge-20220601112023-9404: read-only file system
	I0601 11:32:17.418622    9536 cli_runner.go:164] Run: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}
	W0601 11:32:18.549893    9536 cli_runner.go:211] docker container inspect bridge-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:18.549952    9536 cli_runner.go:217] Completed: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: (1.1311825s)
	I0601 11:32:18.549952    9536 delete.go:82] Unable to get host status for bridge-20220601112023-9404, assuming it has already been deleted: state: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	W0601 11:32:18.549952    9536 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for bridge-20220601112023-9404 container: docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/bridge-20220601112023-9404': mkdir /var/lib/docker/volumes/bridge-20220601112023-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for bridge-20220601112023-9404 container: docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/bridge-20220601112023-9404': mkdir /var/lib/docker/volumes/bridge-20220601112023-9404: read-only file system
	
	I0601 11:32:18.549952    9536 start.go:614] Will try again in 5 seconds ...
	I0601 11:32:23.564871    9536 start.go:352] acquiring machines lock for bridge-20220601112023-9404: {Name:mkdf4688f7b88074eff53791f9744ee7142a8c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:32:23.565348    9536 start.go:356] acquired machines lock for "bridge-20220601112023-9404" in 212.9µs
	I0601 11:32:23.565665    9536 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:32:23.565665    9536 fix.go:55] fixHost starting: 
	I0601 11:32:23.579627    9536 cli_runner.go:164] Run: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}
	W0601 11:32:24.705687    9536 cli_runner.go:211] docker container inspect bridge-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:24.705687    9536 cli_runner.go:217] Completed: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: (1.1260474s)
	I0601 11:32:24.705687    9536 fix.go:103] recreateIfNeeded on bridge-20220601112023-9404: state= err=unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:24.705687    9536 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:32:24.709697    9536 out.go:177] * docker "bridge-20220601112023-9404" container is missing, will recreate.
	I0601 11:32:24.711701    9536 delete.go:124] DEMOLISHING bridge-20220601112023-9404 ...
	I0601 11:32:24.724689    9536 cli_runner.go:164] Run: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}
	W0601 11:32:25.911531    9536 cli_runner.go:211] docker container inspect bridge-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:25.911531    9536 cli_runner.go:217] Completed: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: (1.1868287s)
	W0601 11:32:25.911531    9536 stop.go:75] unable to get state: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:25.911531    9536 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:25.925864    9536 cli_runner.go:164] Run: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}
	W0601 11:32:27.032538    9536 cli_runner.go:211] docker container inspect bridge-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:27.032538    9536 cli_runner.go:217] Completed: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: (1.1066619s)
	I0601 11:32:27.032538    9536 delete.go:82] Unable to get host status for bridge-20220601112023-9404, assuming it has already been deleted: state: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:27.039534    9536 cli_runner.go:164] Run: docker container inspect -f {{.Id}} bridge-20220601112023-9404
	W0601 11:32:28.151377    9536 cli_runner.go:211] docker container inspect -f {{.Id}} bridge-20220601112023-9404 returned with exit code 1
	I0601 11:32:28.151429    9536 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} bridge-20220601112023-9404: (1.1117838s)
	I0601 11:32:28.151475    9536 kic.go:356] could not find the container bridge-20220601112023-9404 to remove it. will try anyways
	I0601 11:32:28.160139    9536 cli_runner.go:164] Run: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}
	W0601 11:32:29.253778    9536 cli_runner.go:211] docker container inspect bridge-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:29.253778    9536 cli_runner.go:217] Completed: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: (1.0930921s)
	W0601 11:32:29.253778    9536 oci.go:84] error getting container status, will try to delete anyways: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:29.259768    9536 cli_runner.go:164] Run: docker exec --privileged -t bridge-20220601112023-9404 /bin/bash -c "sudo init 0"
	W0601 11:32:30.413009    9536 cli_runner.go:211] docker exec --privileged -t bridge-20220601112023-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:32:30.413009    9536 cli_runner.go:217] Completed: docker exec --privileged -t bridge-20220601112023-9404 /bin/bash -c "sudo init 0": (1.1529542s)
	I0601 11:32:30.413122    9536 oci.go:625] error shutdown bridge-20220601112023-9404: docker exec --privileged -t bridge-20220601112023-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:31.432334    9536 cli_runner.go:164] Run: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}
	W0601 11:32:32.563369    9536 cli_runner.go:211] docker container inspect bridge-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:32.563369    9536 cli_runner.go:217] Completed: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: (1.1309732s)
	I0601 11:32:32.563369    9536 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:32.563369    9536 oci.go:639] temporary error: container bridge-20220601112023-9404 status is  but expect it to be exited
	I0601 11:32:32.563369    9536 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:33.048202    9536 cli_runner.go:164] Run: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}
	W0601 11:32:34.143980    9536 cli_runner.go:211] docker container inspect bridge-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:34.143980    9536 cli_runner.go:217] Completed: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: (1.0957656s)
	I0601 11:32:34.143980    9536 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:34.143980    9536 oci.go:639] temporary error: container bridge-20220601112023-9404 status is  but expect it to be exited
	I0601 11:32:34.143980    9536 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:35.048873    9536 cli_runner.go:164] Run: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}
	W0601 11:32:36.145871    9536 cli_runner.go:211] docker container inspect bridge-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:36.145871    9536 cli_runner.go:217] Completed: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: (1.096986s)
	I0601 11:32:36.145871    9536 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:36.145871    9536 oci.go:639] temporary error: container bridge-20220601112023-9404 status is  but expect it to be exited
	I0601 11:32:36.145871    9536 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:36.799249    9536 cli_runner.go:164] Run: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}
	W0601 11:32:37.919032    9536 cli_runner.go:211] docker container inspect bridge-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:37.919253    9536 cli_runner.go:217] Completed: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: (1.1197706s)
	I0601 11:32:37.919370    9536 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:37.919409    9536 oci.go:639] temporary error: container bridge-20220601112023-9404 status is  but expect it to be exited
	I0601 11:32:37.919473    9536 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:39.041570    9536 cli_runner.go:164] Run: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}
	W0601 11:32:40.173944    9536 cli_runner.go:211] docker container inspect bridge-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:40.173944    9536 cli_runner.go:217] Completed: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: (1.1323613s)
	I0601 11:32:40.173944    9536 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:40.173944    9536 oci.go:639] temporary error: container bridge-20220601112023-9404 status is  but expect it to be exited
	I0601 11:32:40.173944    9536 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:41.708232    9536 cli_runner.go:164] Run: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}
	W0601 11:32:42.818091    9536 cli_runner.go:211] docker container inspect bridge-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:42.818145    9536 cli_runner.go:217] Completed: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: (1.1096253s)
	I0601 11:32:42.818252    9536 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:42.818293    9536 oci.go:639] temporary error: container bridge-20220601112023-9404 status is  but expect it to be exited
	I0601 11:32:42.818335    9536 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:45.876598    9536 cli_runner.go:164] Run: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}
	W0601 11:32:46.978208    9536 cli_runner.go:211] docker container inspect bridge-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:46.978208    9536 cli_runner.go:217] Completed: docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: (1.1015979s)
	I0601 11:32:46.978208    9536 oci.go:637] temporary error verifying shutdown: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:32:46.978208    9536 oci.go:639] temporary error: container bridge-20220601112023-9404 status is  but expect it to be exited
	I0601 11:32:46.978208    9536 oci.go:88] couldn't shut down bridge-20220601112023-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "bridge-20220601112023-9404": docker container inspect bridge-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	 
	I0601 11:32:46.985265    9536 cli_runner.go:164] Run: docker rm -f -v bridge-20220601112023-9404
	I0601 11:32:48.112743    9536 cli_runner.go:217] Completed: docker rm -f -v bridge-20220601112023-9404: (1.1274649s)
	I0601 11:32:48.120291    9536 cli_runner.go:164] Run: docker container inspect -f {{.Id}} bridge-20220601112023-9404
	W0601 11:32:49.240534    9536 cli_runner.go:211] docker container inspect -f {{.Id}} bridge-20220601112023-9404 returned with exit code 1
	I0601 11:32:49.240534    9536 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} bridge-20220601112023-9404: (1.1202296s)
	I0601 11:32:49.247546    9536 cli_runner.go:164] Run: docker network inspect bridge-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:32:50.381422    9536 cli_runner.go:211] docker network inspect bridge-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:32:50.381455    9536 cli_runner.go:217] Completed: docker network inspect bridge-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1336807s)
	I0601 11:32:50.389940    9536 network_create.go:272] running [docker network inspect bridge-20220601112023-9404] to gather additional debugging logs...
	I0601 11:32:50.389992    9536 cli_runner.go:164] Run: docker network inspect bridge-20220601112023-9404
	W0601 11:32:51.529032    9536 cli_runner.go:211] docker network inspect bridge-20220601112023-9404 returned with exit code 1
	I0601 11:32:51.529106    9536 cli_runner.go:217] Completed: docker network inspect bridge-20220601112023-9404: (1.1387409s)
	I0601 11:32:51.529106    9536 network_create.go:275] error running [docker network inspect bridge-20220601112023-9404]: docker network inspect bridge-20220601112023-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220601112023-9404
	I0601 11:32:51.529175    9536 network_create.go:277] output of [docker network inspect bridge-20220601112023-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220601112023-9404
	
	** /stderr **
	W0601 11:32:51.529939    9536 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:32:51.529939    9536 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:32:52.537431    9536 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:32:52.541981    9536 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:32:52.542249    9536 start.go:165] libmachine.API.Create for "bridge-20220601112023-9404" (driver="docker")
	I0601 11:32:52.542333    9536 client.go:168] LocalClient.Create starting
	I0601 11:32:52.542726    9536 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:32:52.542726    9536 main.go:134] libmachine: Decoding PEM data...
	I0601 11:32:52.542726    9536 main.go:134] libmachine: Parsing certificate...
	I0601 11:32:52.542726    9536 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:32:52.543432    9536 main.go:134] libmachine: Decoding PEM data...
	I0601 11:32:52.543490    9536 main.go:134] libmachine: Parsing certificate...
	I0601 11:32:52.553264    9536 cli_runner.go:164] Run: docker network inspect bridge-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:32:53.624384    9536 cli_runner.go:211] docker network inspect bridge-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:32:53.624384    9536 cli_runner.go:217] Completed: docker network inspect bridge-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0711082s)
	I0601 11:32:53.631383    9536 network_create.go:272] running [docker network inspect bridge-20220601112023-9404] to gather additional debugging logs...
	I0601 11:32:53.631383    9536 cli_runner.go:164] Run: docker network inspect bridge-20220601112023-9404
	W0601 11:32:54.740327    9536 cli_runner.go:211] docker network inspect bridge-20220601112023-9404 returned with exit code 1
	I0601 11:32:54.740327    9536 cli_runner.go:217] Completed: docker network inspect bridge-20220601112023-9404: (1.1089314s)
	I0601 11:32:54.740327    9536 network_create.go:275] error running [docker network inspect bridge-20220601112023-9404]: docker network inspect bridge-20220601112023-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220601112023-9404
	I0601 11:32:54.740327    9536 network_create.go:277] output of [docker network inspect bridge-20220601112023-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220601112023-9404
	
	** /stderr **
	I0601 11:32:54.747523    9536 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:32:55.871799    9536 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1241216s)
	I0601 11:32:55.893170    9536 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003be2b0] amended:false}} dirty:map[] misses:0}
	I0601 11:32:55.893170    9536 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:32:55.912889    9536 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003be2b0] amended:true}} dirty:map[192.168.49.0:0xc0003be2b0 192.168.58.0:0xc000624b10] misses:0}
	I0601 11:32:55.912889    9536 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:32:55.912889    9536 network_create.go:115] attempt to create docker network bridge-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:32:55.923265    9536 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220601112023-9404
	W0601 11:32:57.055650    9536 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220601112023-9404 returned with exit code 1
	I0601 11:32:57.055650    9536 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220601112023-9404: (1.1323716s)
	E0601 11:32:57.055650    9536 network_create.go:104] error while trying to create docker network bridge-20220601112023-9404 192.168.58.0/24: create docker network bridge-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 76c57be5e2821a01fe6b34a691c60d84acf969526eeb4039d736cd5744b1fbb1 (br-76c57be5e282): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:32:57.055650    9536 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 76c57be5e2821a01fe6b34a691c60d84acf969526eeb4039d736cd5744b1fbb1 (br-76c57be5e282): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 76c57be5e2821a01fe6b34a691c60d84acf969526eeb4039d736cd5744b1fbb1 (br-76c57be5e282): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:32:57.068881    9536 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:32:58.193041    9536 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1241475s)
	I0601 11:32:58.200446    9536 cli_runner.go:164] Run: docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:32:59.314965    9536 cli_runner.go:211] docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:32:59.314965    9536 cli_runner.go:217] Completed: docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1145063s)
	I0601 11:32:59.314965    9536 client.go:171] LocalClient.Create took 6.7725561s
	I0601 11:33:01.329470    9536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:33:01.336642    9536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404
	W0601 11:33:02.424468    9536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404 returned with exit code 1
	I0601 11:33:02.424468    9536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: (1.087666s)
	I0601 11:33:02.424468    9536 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:33:02.779567    9536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404
	W0601 11:33:03.904249    9536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404 returned with exit code 1
	I0601 11:33:03.904249    9536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: (1.1246696s)
	W0601 11:33:03.904249    9536 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	
	W0601 11:33:03.904249    9536 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:33:03.915570    9536 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:33:03.920637    9536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404
	W0601 11:33:05.015219    9536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404 returned with exit code 1
	I0601 11:33:05.015219    9536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: (1.0945223s)
	I0601 11:33:05.015472    9536 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:33:05.244267    9536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404
	W0601 11:33:06.341690    9536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404 returned with exit code 1
	I0601 11:33:06.341970    9536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: (1.0974106s)
	W0601 11:33:06.342238    9536 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	
	W0601 11:33:06.342277    9536 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:33:06.342277    9536 start.go:134] duration metric: createHost completed in 13.8045198s
	I0601 11:33:06.353220    9536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:33:06.359318    9536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404
	W0601 11:33:07.484139    9536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404 returned with exit code 1
	I0601 11:33:07.484139    9536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: (1.1245852s)
	I0601 11:33:07.484347    9536 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:33:07.735787    9536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404
	W0601 11:33:08.845485    9536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404 returned with exit code 1
	I0601 11:33:08.845485    9536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: (1.1096856s)
	W0601 11:33:08.845485    9536 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	
	W0601 11:33:08.845485    9536 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:33:08.857700    9536 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:33:08.864177    9536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404
	W0601 11:33:09.964960    9536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404 returned with exit code 1
	I0601 11:33:09.964960    9536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: (1.1007705s)
	I0601 11:33:09.964960    9536 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:33:10.173858    9536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404
	W0601 11:33:11.205345    9536 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404 returned with exit code 1
	I0601 11:33:11.205345    9536 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: (1.0314756s)
	W0601 11:33:11.205345    9536 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	
	W0601 11:33:11.205345    9536 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220601112023-9404
	I0601 11:33:11.205345    9536 fix.go:57] fixHost completed within 47.6391471s
	I0601 11:33:11.205345    9536 start.go:81] releasing machines lock for "bridge-20220601112023-9404", held for 47.6393825s
	W0601 11:33:11.206129    9536 out.go:239] * Failed to start docker container. Running "minikube delete -p bridge-20220601112023-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220601112023-9404 container: docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/bridge-20220601112023-9404': mkdir /var/lib/docker/volumes/bridge-20220601112023-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p bridge-20220601112023-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220601112023-9404 container: docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/bridge-20220601112023-9404': mkdir /var/lib/docker/volumes/bridge-20220601112023-9404: read-only file system
	
	I0601 11:33:11.211743    9536 out.go:177] 
	W0601 11:33:11.214237    9536 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220601112023-9404 container: docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/bridge-20220601112023-9404': mkdir /var/lib/docker/volumes/bridge-20220601112023-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220601112023-9404 container: docker volume create bridge-20220601112023-9404 --label name.minikube.sigs.k8s.io=bridge-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/bridge-20220601112023-9404': mkdir /var/lib/docker/volumes/bridge-20220601112023-9404: read-only file system
	
	W0601 11:33:11.214418    9536 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:33:11.214538    9536 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:33:11.217322    9536 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/bridge/Start (79.18s)
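The "networks have overlapping IPv4" error in the log above means the 192.168.58.0/24 subnet minikube reserved collides with a pre-existing Docker bridge network (br-50298ec25928). The overlap check Docker performs can be sketched with Python's stdlib `ipaddress` module; the existing network's subnet is not recorded in the log, so the value used below is illustrative only.

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """True when two CIDR blocks share at least one address."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

requested = "192.168.58.0/24"  # subnet minikube tried to reserve (from the log)
existing = "192.168.58.0/25"   # hypothetical subnet of br-50298ec25928; not in the log

print(subnets_overlap(requested, existing))           # True -> network create is refused
print(subnets_overlap(requested, "192.168.49.0/24"))  # False -> no conflict
```

This is a sketch of the conflict condition only; Docker performs the check daemon-side when `docker network create --subnet=...` runs.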

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (7.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20220601112753-9404 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p newest-cni-20220601112753-9404 "sudo crictl images -o json": exit status 80 (3.3763714s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_6.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p newest-cni-20220601112753-9404 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220601112753-9404

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220601112753-9404: exit status 1 (1.1726184s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220601112753-9404

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404: exit status 7 (3.0103384s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:32:06.033197    9044 status.go:247] status error: host: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220601112753-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (7.57s)
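The image diff above comes from decoding the output of `sudo crictl images -o json` and comparing a wanted image list against it; because the container no longer existed, the output was empty ("unexpected end of JSON input") and every image was reported missing. A minimal sketch of that comparison, assuming the CRI `ListImages` JSON shape (`images[].repoTags`) and a hypothetical helper name `missing_images`, might look like:

```python
import json

def missing_images(want: list[str], crictl_json: str) -> list[str]:
    """Return wanted image references absent from `crictl images -o json` output."""
    got: set[str] = set()
    for img in json.loads(crictl_json).get("images", []):
        got.update(img.get("repoTags", []))
    return [w for w in want if w not in got]

# Simulated crictl output that contains only one of the two wanted images.
sample = json.dumps({"images": [{"repoTags": ["k8s.gcr.io/pause:3.6"]}]})
print(missing_images(["k8s.gcr.io/pause:3.6", "k8s.gcr.io/etcd:3.5.1-0"], sample))
# ['k8s.gcr.io/etcd:3.5.1-0']
```

An empty string, as in this failure, would raise the same `json` decode error the test reports before any comparison can run.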

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (11.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20220601112753-9404 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p newest-cni-20220601112753-9404 --alsologtostderr -v=1: exit status 80 (3.3900393s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 11:32:06.325728   10188 out.go:296] Setting OutFile to fd 1964 ...
	I0601 11:32:06.420384   10188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:32:06.420384   10188 out.go:309] Setting ErrFile to fd 1572...
	I0601 11:32:06.420384   10188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:32:06.432425   10188 out.go:303] Setting JSON to false
	I0601 11:32:06.432425   10188 mustload.go:65] Loading cluster: newest-cni-20220601112753-9404
	I0601 11:32:06.433215   10188 config.go:178] Loaded profile config "newest-cni-20220601112753-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:32:06.445956   10188 cli_runner.go:164] Run: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}
	W0601 11:32:09.129466   10188 cli_runner.go:211] docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:09.129466   10188 cli_runner.go:217] Completed: docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: (2.6834801s)
	I0601 11:32:09.133828   10188 out.go:177] 
	W0601 11:32:09.136137   10188 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404
	
	W0601 11:32:09.136195   10188 out.go:239] * 
	* 
	W0601 11:32:09.428267   10188 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_11.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_11.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:32:09.431290   10188 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p newest-cni-20220601112753-9404 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220601112753-9404

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220601112753-9404: exit status 1 (1.1973181s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220601112753-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404: exit status 7 (3.0037215s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:32:13.637937    3116 status.go:247] status error: host: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220601112753-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220601112753-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220601112753-9404: exit status 1 (1.1640972s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220601112753-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220601112753-9404 -n newest-cni-20220601112753-9404: exit status 7 (3.0634199s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:32:17.881667    6460 status.go:247] status error: host: state: unknown state "newest-cni-20220601112753-9404": docker container inspect newest-cni-20220601112753-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220601112753-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220601112753-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (11.85s)
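[Editor's note] Every `Pause` failure in this run follows the same pattern: the profile's Docker container has already been deleted, so the pre-pause state probe fails and minikube exits with GUEST_STATUS (exit code 80), while `status --format={{.Host}}` reports `Nonexistent` (exit code 7). A minimal sketch of that probe, not minikube's actual code; the helper name `container_state` is hypothetical:

```shell
# Hypothetical helper mirroring the state probe minikube runs before "pause":
# "docker container inspect" exits non-zero for a deleted container, which is
# surfaced as the "Nonexistent" host state seen in the post-mortems above.
container_state() {
  local state
  state=$(docker container inspect "$1" --format '{{.State.Status}}' 2>/dev/null) \
    && echo "$state" \
    || echo "Nonexistent"
}
container_state newest-cni-20220601112753-9404
```

With the container gone (or Docker unreachable), the probe prints `Nonexistent`, matching the status output captured in the post-mortem logs.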

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (4.26s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220601112749-9404" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601112749-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220601112749-9404: exit status 1 (1.1885427s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (3.0621858s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:32:10.491894    7376 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220601112749-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (4.26s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (4.55s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220601112749-9404" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220601112749-9404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601112749-9404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (338.8062ms)

** stderr ** 
	error: context "default-k8s-different-port-20220601112749-9404" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-different-port-20220601112749-9404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601112749-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220601112749-9404: exit status 1 (1.1401153s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (3.0598422s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:32:15.050184    7688 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220601112749-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (4.55s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.44s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220601112749-9404 "sudo crictl images -o json"

=== CONT  TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220601112749-9404 "sudo crictl images -o json": exit status 80 (3.3447528s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_6.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:306: failed tp get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220601112749-9404 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:306: failed to decode images json unexpected end of JSON input. output:

start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601112749-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220601112749-9404: exit status 1 (1.1492345s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (2.933165s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:32:22.472738    6492 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220601112749-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.44s)
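[Editor's note] The VerifyKubernetesImages diff above lists every v1.23.6 image as missing because `minikube ssh` failed before `crictl` ever ran: the test then decodes an empty string as JSON, which is what produces "unexpected end of JSON input", and the expected-image list is compared against an empty set. A sketch of that decode failure, using python3's `json` module as a stand-in for Go's `encoding/json`:

```shell
# Zero bytes of "crictl images -o json" output cannot be decoded, so the
# image comparison runs against nothing and every expected image is "missing".
printf '' | python3 -c '
import json, sys
try:
    json.load(sys.stdin)
    print("decoded ok")
except ValueError:
    print("decode failed: empty input")'
```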

TestStartStop/group/default-k8s-different-port/serial/Pause (11.67s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220601112749-9404 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220601112749-9404 --alsologtostderr -v=1: exit status 80 (3.2529262s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0601 11:32:22.736606   10052 out.go:296] Setting OutFile to fd 1756 ...
	I0601 11:32:22.810200   10052 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:32:22.810200   10052 out.go:309] Setting ErrFile to fd 1424...
	I0601 11:32:22.810200   10052 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:32:22.843228   10052 out.go:303] Setting JSON to false
	I0601 11:32:22.843228   10052 mustload.go:65] Loading cluster: default-k8s-different-port-20220601112749-9404
	I0601 11:32:22.844316   10052 config.go:178] Loaded profile config "default-k8s-different-port-20220601112749-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:32:22.858879   10052 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}
	W0601 11:32:25.429712   10052 cli_runner.go:211] docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:32:25.429712   10052 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: (2.5708042s)
	I0601 11:32:25.433712   10052 out.go:177] 
	W0601 11:32:25.435714   10052 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404
	
	W0601 11:32:25.435714   10052 out.go:239] * 
	* 
	W0601 11:32:25.699161   10052 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_11.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_11.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:32:25.710753   10052 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220601112749-9404 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601112749-9404

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220601112749-9404: exit status 1 (1.1715648s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (3.0353046s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:32:29.944002    5356 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220601112749-9404" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601112749-9404
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220601112749-9404: exit status 1 (1.1889129s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220601112749-9404 -n default-k8s-different-port-20220601112749-9404: exit status 7 (3.0047129s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0601 11:32:34.159718    5340 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220601112749-9404": docker container inspect default-k8s-different-port-20220601112749-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220601112749-9404

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220601112749-9404" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (11.67s)

TestNetworkPlugins/group/enable-default-cni/Start (77.56s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20220601112023-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p enable-default-cni-20220601112023-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: exit status 60 (1m17.4854157s)

-- stdout --
	* [enable-default-cni-20220601112023-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node enable-default-cni-20220601112023-9404 in cluster enable-default-cni-20220601112023-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "enable-default-cni-20220601112023-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:32:34.563127   10096 out.go:296] Setting OutFile to fd 1412 ...
	I0601 11:32:34.618422   10096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:32:34.618422   10096 out.go:309] Setting ErrFile to fd 1732...
	I0601 11:32:34.618505   10096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:32:34.633326   10096 out.go:303] Setting JSON to false
	I0601 11:32:34.635294   10096 start.go:115] hostinfo: {"hostname":"minikube2","uptime":15090,"bootTime":1654068064,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:32:34.636113   10096 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:32:34.640222   10096 out.go:177] * [enable-default-cni-20220601112023-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:32:34.643612   10096 notify.go:193] Checking for updates...
	I0601 11:32:34.645216   10096 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:32:34.647825   10096 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:32:34.650885   10096 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:32:34.653332   10096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:32:34.657264   10096 config.go:178] Loaded profile config "bridge-20220601112023-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:32:34.657706   10096 config.go:178] Loaded profile config "default-k8s-different-port-20220601112749-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:32:34.658160   10096 config.go:178] Loaded profile config "false-20220601112030-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:32:34.658515   10096 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:32:34.658599   10096 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:32:37.401825   10096 docker.go:137] docker version: linux-20.10.14
	I0601 11:32:37.411823   10096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:32:39.659910   10096 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2480615s)
	I0601 11:32:39.659910   10096 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:32:38.4978738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:32:39.662900   10096 out.go:177] * Using the docker driver based on user configuration
	I0601 11:32:39.666900   10096 start.go:284] selected driver: docker
	I0601 11:32:39.666900   10096 start.go:806] validating driver "docker" against <nil>
	I0601 11:32:39.666900   10096 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:32:39.749376   10096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:32:41.884172   10096 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1347719s)
	I0601 11:32:41.884172   10096 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:32:40.8231644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:32:41.884172   10096 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	E0601 11:32:41.885457   10096 start_flags.go:444] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0601 11:32:41.885549   10096 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:32:41.888959   10096 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:32:41.891148   10096 cni.go:95] Creating CNI manager for "bridge"
	I0601 11:32:41.891148   10096 start_flags.go:301] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0601 11:32:41.891148   10096 start_flags.go:306] config:
	{Name:enable-default-cni-20220601112023-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:enable-default-cni-20220601112023-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:32:41.895314   10096 out.go:177] * Starting control plane node enable-default-cni-20220601112023-9404 in cluster enable-default-cni-20220601112023-9404
	I0601 11:32:41.897248   10096 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:32:41.901271   10096 out.go:177] * Pulling base image ...
	I0601 11:32:41.904742   10096 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:32:41.904742   10096 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:32:41.904742   10096 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:32:41.904742   10096 cache.go:57] Caching tarball of preloaded images
	I0601 11:32:41.904742   10096 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:32:41.904742   10096 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:32:41.905875   10096 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\enable-default-cni-20220601112023-9404\config.json ...
	I0601 11:32:41.906069   10096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\enable-default-cni-20220601112023-9404\config.json: {Name:mkaff96b59f900a58166f89b961fe3437ff7c9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:32:43.036703   10096 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:32:43.036703   10096 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:32:43.036703   10096 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:32:43.036703   10096 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:32:43.036703   10096 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:32:43.036703   10096 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:32:43.036703   10096 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:32:43.036703   10096 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:32:43.036703   10096 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:32:45.381940   10096 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:32:45.382023   10096 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:32:45.382113   10096 start.go:352] acquiring machines lock for enable-default-cni-20220601112023-9404: {Name:mk2be5a342e3f2e8732c73ffc7245a2776525709 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:32:45.382357   10096 start.go:356] acquired machines lock for "enable-default-cni-20220601112023-9404" in 220.9µs
	I0601 11:32:45.382562   10096 start.go:91] Provisioning new machine with config: &{Name:enable-default-cni-20220601112023-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:enable-default-cni-20220601112023-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:32:45.382562   10096 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:32:45.386106   10096 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:32:45.386667   10096 start.go:165] libmachine.API.Create for "enable-default-cni-20220601112023-9404" (driver="docker")
	I0601 11:32:45.386831   10096 client.go:168] LocalClient.Create starting
	I0601 11:32:45.387696   10096 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:32:45.387758   10096 main.go:134] libmachine: Decoding PEM data...
	I0601 11:32:45.387758   10096 main.go:134] libmachine: Parsing certificate...
	I0601 11:32:45.387758   10096 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:32:45.388307   10096 main.go:134] libmachine: Decoding PEM data...
	I0601 11:32:45.388307   10096 main.go:134] libmachine: Parsing certificate...
	I0601 11:32:45.398442   10096 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:32:46.538669   10096 cli_runner.go:211] docker network inspect enable-default-cni-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:32:46.538669   10096 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1402142s)
	I0601 11:32:46.546737   10096 network_create.go:272] running [docker network inspect enable-default-cni-20220601112023-9404] to gather additional debugging logs...
	I0601 11:32:46.546737   10096 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220601112023-9404
	W0601 11:32:47.703035   10096 cli_runner.go:211] docker network inspect enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:32:47.703035   10096 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220601112023-9404: (1.1562843s)
	I0601 11:32:47.703035   10096 network_create.go:275] error running [docker network inspect enable-default-cni-20220601112023-9404]: docker network inspect enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220601112023-9404
	I0601 11:32:47.703035   10096 network_create.go:277] output of [docker network inspect enable-default-cni-20220601112023-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220601112023-9404
	
	** /stderr **
	I0601 11:32:47.712128   10096 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:32:48.843322   10096 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1311366s)
	I0601 11:32:48.869338   10096 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000606308] misses:0}
	I0601 11:32:48.869338   10096 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:32:48.869338   10096 network_create.go:115] attempt to create docker network enable-default-cni-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:32:48.880735   10096 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220601112023-9404
	W0601 11:32:50.021144   10096 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:32:50.021144   10096 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220601112023-9404: (1.1403961s)
	E0601 11:32:50.021144   10096 network_create.go:104] error while trying to create docker network enable-default-cni-20220601112023-9404 192.168.49.0/24: create docker network enable-default-cni-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5638ac07f56978673fbd7075da116b8097626258988c1d02c58963145c8ffc56 (br-5638ac07f569): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:32:50.021144   10096 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5638ac07f56978673fbd7075da116b8097626258988c1d02c58963145c8ffc56 (br-5638ac07f569): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5638ac07f56978673fbd7075da116b8097626258988c1d02c58963145c8ffc56 (br-5638ac07f569): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:32:50.037800   10096 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:32:51.218344   10096 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1803061s)
	I0601 11:32:51.225127   10096 cli_runner.go:164] Run: docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:32:52.333101   10096 cli_runner.go:211] docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:32:52.333101   10096 cli_runner.go:217] Completed: docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1079613s)
	I0601 11:32:52.333101   10096 client.go:171] LocalClient.Create took 6.9461926s
	I0601 11:32:54.357273   10096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:32:54.363881   10096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404
	W0601 11:32:55.498952   10096 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:32:55.498952   10096 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: (1.1350581s)
	I0601 11:32:55.498952   10096 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:32:55.788209   10096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404
	W0601 11:32:56.917439   10096 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:32:56.917624   10096 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: (1.129032s)
	W0601 11:32:56.917825   10096 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	
	W0601 11:32:56.917862   10096 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:32:56.929593   10096 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:32:56.939684   10096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404
	W0601 11:32:58.051215   10096 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:32:58.051321   10096 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: (1.1114167s)
	I0601 11:32:58.051403   10096 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:32:58.365576   10096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404
	W0601 11:32:59.518053   10096 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:32:59.518053   10096 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: (1.1521702s)
	W0601 11:32:59.518327   10096 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	
	W0601 11:32:59.518439   10096 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:32:59.518439   10096 start.go:134] duration metric: createHost completed in 14.1357183s
	I0601 11:32:59.518439   10096 start.go:81] releasing machines lock for "enable-default-cni-20220601112023-9404", held for 14.1359236s
	W0601 11:32:59.518583   10096 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220601112023-9404 container: docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220601112023-9404': mkdir /var/lib/docker/volumes/enable-default-cni-20220601112023-9404: read-only file system
	I0601 11:32:59.533428   10096 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:00.621290   10096 cli_runner.go:211] docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:00.621290   10096 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: (1.0878495s)
	I0601 11:33:00.621290   10096 delete.go:82] Unable to get host status for enable-default-cni-20220601112023-9404, assuming it has already been deleted: state: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	W0601 11:33:00.621290   10096 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220601112023-9404 container: docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220601112023-9404': mkdir /var/lib/docker/volumes/enable-default-cni-20220601112023-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220601112023-9404 container: docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220601112023-9404': mkdir /var/lib/docker/volumes/enable-default-cni-20220601112023-9404: read-only file system
	
	I0601 11:33:00.621290   10096 start.go:614] Will try again in 5 seconds ...
	I0601 11:33:05.636165   10096 start.go:352] acquiring machines lock for enable-default-cni-20220601112023-9404: {Name:mk2be5a342e3f2e8732c73ffc7245a2776525709 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:33:05.636360   10096 start.go:356] acquired machines lock for "enable-default-cni-20220601112023-9404" in 24.7µs
	I0601 11:33:05.636360   10096 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:33:05.636360   10096 fix.go:55] fixHost starting: 
	I0601 11:33:05.650897   10096 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:06.749480   10096 cli_runner.go:211] docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:06.749480   10096 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: (1.0985708s)
	I0601 11:33:06.749480   10096 fix.go:103] recreateIfNeeded on enable-default-cni-20220601112023-9404: state= err=unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:06.749480   10096 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:33:06.753737   10096 out.go:177] * docker "enable-default-cni-20220601112023-9404" container is missing, will recreate.
	I0601 11:33:06.756294   10096 delete.go:124] DEMOLISHING enable-default-cni-20220601112023-9404 ...
	I0601 11:33:06.771549   10096 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:07.888467   10096 cli_runner.go:211] docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:07.888467   10096 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: (1.1169054s)
	W0601 11:33:07.888467   10096 stop.go:75] unable to get state: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:07.888467   10096 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:07.904044   10096 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:09.014083   10096 cli_runner.go:211] docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:09.014083   10096 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: (1.1100263s)
	I0601 11:33:09.014083   10096 delete.go:82] Unable to get host status for enable-default-cni-20220601112023-9404, assuming it has already been deleted: state: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:09.020074   10096 cli_runner.go:164] Run: docker container inspect -f {{.Id}} enable-default-cni-20220601112023-9404
	W0601 11:33:10.072438   10096 cli_runner.go:211] docker container inspect -f {{.Id}} enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:10.072580   10096 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} enable-default-cni-20220601112023-9404: (1.0522831s)
	I0601 11:33:10.072580   10096 kic.go:356] could not find the container enable-default-cni-20220601112023-9404 to remove it. will try anyways
	I0601 11:33:10.079784   10096 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:11.143949   10096 cli_runner.go:211] docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:11.143949   10096 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: (1.0641526s)
	W0601 11:33:11.143949   10096 oci.go:84] error getting container status, will try to delete anyways: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:11.150933   10096 cli_runner.go:164] Run: docker exec --privileged -t enable-default-cni-20220601112023-9404 /bin/bash -c "sudo init 0"
	W0601 11:33:12.224325   10096 cli_runner.go:211] docker exec --privileged -t enable-default-cni-20220601112023-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:33:12.224461   10096 cli_runner.go:217] Completed: docker exec --privileged -t enable-default-cni-20220601112023-9404 /bin/bash -c "sudo init 0": (1.0733809s)
	I0601 11:33:12.224510   10096 oci.go:625] error shutdown enable-default-cni-20220601112023-9404: docker exec --privileged -t enable-default-cni-20220601112023-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:13.239522   10096 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:14.340699   10096 cli_runner.go:211] docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:14.340699   10096 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: (1.101164s)
	I0601 11:33:14.340865   10096 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:14.340933   10096 oci.go:639] temporary error: container enable-default-cni-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:14.340989   10096 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:14.818127   10096 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:15.929611   10096 cli_runner.go:211] docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:15.929611   10096 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: (1.1113125s)
	I0601 11:33:15.929798   10096 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:15.929798   10096 oci.go:639] temporary error: container enable-default-cni-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:15.929861   10096 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:16.841251   10096 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:17.933296   10096 cli_runner.go:211] docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:17.933358   10096 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: (1.09186s)
	I0601 11:33:17.933358   10096 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:17.933358   10096 oci.go:639] temporary error: container enable-default-cni-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:17.933358   10096 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:18.578428   10096 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:19.669609   10096 cli_runner.go:211] docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:19.669722   10096 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: (1.0909416s)
	I0601 11:33:19.669722   10096 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:19.669722   10096 oci.go:639] temporary error: container enable-default-cni-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:19.669722   10096 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:20.785558   10096 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:21.860858   10096 cli_runner.go:211] docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:21.860894   10096 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: (1.0751291s)
	I0601 11:33:21.861085   10096 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:21.861153   10096 oci.go:639] temporary error: container enable-default-cni-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:21.861200   10096 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:23.382116   10096 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:24.472119   10096 cli_runner.go:211] docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:24.472119   10096 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: (1.0899916s)
	I0601 11:33:24.472119   10096 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:24.472119   10096 oci.go:639] temporary error: container enable-default-cni-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:24.472119   10096 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:27.534822   10096 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:28.593233   10096 cli_runner.go:211] docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:28.593233   10096 cli_runner.go:217] Completed: docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: (1.0583989s)
	I0601 11:33:28.593581   10096 oci.go:637] temporary error verifying shutdown: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:28.593581   10096 oci.go:639] temporary error: container enable-default-cni-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:28.593581   10096 oci.go:88] couldn't shut down enable-default-cni-20220601112023-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220601112023-9404": docker container inspect enable-default-cni-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	 
	I0601 11:33:28.601096   10096 cli_runner.go:164] Run: docker rm -f -v enable-default-cni-20220601112023-9404
	I0601 11:33:29.666655   10096 cli_runner.go:217] Completed: docker rm -f -v enable-default-cni-20220601112023-9404: (1.0655473s)
	I0601 11:33:29.674811   10096 cli_runner.go:164] Run: docker container inspect -f {{.Id}} enable-default-cni-20220601112023-9404
	W0601 11:33:30.712386   10096 cli_runner.go:211] docker container inspect -f {{.Id}} enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:30.712386   10096 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} enable-default-cni-20220601112023-9404: (1.0375637s)
	I0601 11:33:30.721515   10096 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:33:31.770633   10096 cli_runner.go:211] docker network inspect enable-default-cni-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:33:31.770633   10096 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0491067s)
	I0601 11:33:31.778242   10096 network_create.go:272] running [docker network inspect enable-default-cni-20220601112023-9404] to gather additional debugging logs...
	I0601 11:33:31.778242   10096 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220601112023-9404
	W0601 11:33:32.841393   10096 cli_runner.go:211] docker network inspect enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:32.841583   10096 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220601112023-9404: (1.0631387s)
	I0601 11:33:32.841617   10096 network_create.go:275] error running [docker network inspect enable-default-cni-20220601112023-9404]: docker network inspect enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220601112023-9404
	I0601 11:33:32.841617   10096 network_create.go:277] output of [docker network inspect enable-default-cni-20220601112023-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220601112023-9404
	
	** /stderr **
	W0601 11:33:32.843271   10096 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:33:32.843297   10096 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:33:33.855281   10096 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:33:33.859050   10096 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:33:33.859567   10096 start.go:165] libmachine.API.Create for "enable-default-cni-20220601112023-9404" (driver="docker")
	I0601 11:33:33.859622   10096 client.go:168] LocalClient.Create starting
	I0601 11:33:33.860329   10096 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:33:33.860403   10096 main.go:134] libmachine: Decoding PEM data...
	I0601 11:33:33.860403   10096 main.go:134] libmachine: Parsing certificate...
	I0601 11:33:33.860403   10096 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:33:33.861046   10096 main.go:134] libmachine: Decoding PEM data...
	I0601 11:33:33.861046   10096 main.go:134] libmachine: Parsing certificate...
	I0601 11:33:33.869296   10096 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:33:34.924844   10096 cli_runner.go:211] docker network inspect enable-default-cni-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:33:34.924844   10096 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0555355s)
	I0601 11:33:34.932654   10096 network_create.go:272] running [docker network inspect enable-default-cni-20220601112023-9404] to gather additional debugging logs...
	I0601 11:33:34.932654   10096 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220601112023-9404
	W0601 11:33:35.998134   10096 cli_runner.go:211] docker network inspect enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:35.998134   10096 cli_runner.go:217] Completed: docker network inspect enable-default-cni-20220601112023-9404: (1.0652434s)
	I0601 11:33:35.998134   10096 network_create.go:275] error running [docker network inspect enable-default-cni-20220601112023-9404]: docker network inspect enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220601112023-9404
	I0601 11:33:35.998134   10096 network_create.go:277] output of [docker network inspect enable-default-cni-20220601112023-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220601112023-9404
	
	** /stderr **
	I0601 11:33:36.005466   10096 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:33:37.047844   10096 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0420113s)
	I0601 11:33:37.065683   10096 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000606308] amended:false}} dirty:map[] misses:0}
	I0601 11:33:37.065683   10096 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:33:37.080943   10096 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000606308] amended:true}} dirty:map[192.168.49.0:0xc000606308 192.168.58.0:0xc0005ec878] misses:0}
	I0601 11:33:37.081436   10096 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:33:37.081436   10096 network_create.go:115] attempt to create docker network enable-default-cni-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:33:37.088411   10096 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220601112023-9404
	W0601 11:33:38.128509   10096 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:38.128509   10096 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220601112023-9404: (1.0400864s)
	E0601 11:33:38.128509   10096 network_create.go:104] error while trying to create docker network enable-default-cni-20220601112023-9404 192.168.58.0/24: create docker network enable-default-cni-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 94be38897d02ed325bb130244c33b03aa6781126c0266bc1bb3e2405637a3836 (br-94be38897d02): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:33:38.128509   10096 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 94be38897d02ed325bb130244c33b03aa6781126c0266bc1bb3e2405637a3836 (br-94be38897d02): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 94be38897d02ed325bb130244c33b03aa6781126c0266bc1bb3e2405637a3836 (br-94be38897d02): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	I0601 11:33:38.141899   10096 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:33:39.174282   10096 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0323717s)
	I0601 11:33:39.180259   10096 cli_runner.go:164] Run: docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:33:40.203302   10096 cli_runner.go:211] docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:33:40.203302   10096 cli_runner.go:217] Completed: docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0230315s)
	I0601 11:33:40.203302   10096 client.go:171] LocalClient.Create took 6.3436087s
	I0601 11:33:42.220844   10096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:33:42.226855   10096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404
	W0601 11:33:43.228226   10096 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:43.228226   10096 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: (1.0011647s)
	I0601 11:33:43.228434   10096 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:43.579688   10096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404
	W0601 11:33:44.625833   10096 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:44.625871   10096 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: (1.0460432s)
	W0601 11:33:44.626075   10096 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	
	W0601 11:33:44.626100   10096 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:44.636878   10096 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:33:44.642665   10096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404
	W0601 11:33:45.718223   10096 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:45.718261   10096 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: (1.075411s)
	I0601 11:33:45.718460   10096 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:45.948277   10096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404
	W0601 11:33:46.980949   10096 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:46.980949   10096 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: (1.0326105s)
	W0601 11:33:46.980949   10096 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	
	W0601 11:33:46.980949   10096 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:46.980949   10096 start.go:134] duration metric: createHost completed in 13.1255206s
	I0601 11:33:46.990616   10096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:33:46.997116   10096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404
	W0601 11:33:48.068004   10096 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:48.068047   10096 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: (1.0706861s)
	I0601 11:33:48.068214   10096 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:48.331504   10096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404
	W0601 11:33:49.375125   10096 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:49.375328   10096 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: (1.0436089s)
	W0601 11:33:49.375538   10096 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	
	W0601 11:33:49.375568   10096 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:49.386170   10096 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:33:49.391869   10096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404
	W0601 11:33:50.434186   10096 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:50.434220   10096 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: (1.0420111s)
	I0601 11:33:50.434220   10096 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:50.647845   10096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404
	W0601 11:33:51.757524   10096 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404 returned with exit code 1
	I0601 11:33:51.757524   10096 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: (1.1096666s)
	W0601 11:33:51.757524   10096 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	
	W0601 11:33:51.757524   10096 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220601112023-9404
	I0601 11:33:51.757524   10096 fix.go:57] fixHost completed within 46.1206478s
	I0601 11:33:51.757524   10096 start.go:81] releasing machines lock for "enable-default-cni-20220601112023-9404", held for 46.1206478s
	W0601 11:33:51.758247   10096 out.go:239] * Failed to start docker container. Running "minikube delete -p enable-default-cni-20220601112023-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220601112023-9404 container: docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220601112023-9404': mkdir /var/lib/docker/volumes/enable-default-cni-20220601112023-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p enable-default-cni-20220601112023-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220601112023-9404 container: docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220601112023-9404': mkdir /var/lib/docker/volumes/enable-default-cni-20220601112023-9404: read-only file system
	
	I0601 11:33:51.763298   10096 out.go:177] 
	W0601 11:33:51.765720   10096 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220601112023-9404 container: docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220601112023-9404': mkdir /var/lib/docker/volumes/enable-default-cni-20220601112023-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220601112023-9404 container: docker volume create enable-default-cni-20220601112023-9404 --label name.minikube.sigs.k8s.io=enable-default-cni-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220601112023-9404': mkdir /var/lib/docker/volumes/enable-default-cni-20220601112023-9404: read-only file system
	
	W0601 11:33:51.765837   10096 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:33:51.765837   10096 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:33:51.768960   10096 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (77.56s)

TestNetworkPlugins/group/kubenet/Start (77.52s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20220601112023-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubenet-20220601112023-9404 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: exit status 60 (1m17.4339086s)

-- stdout --
	* [kubenet-20220601112023-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubenet-20220601112023-9404 in cluster kubenet-20220601112023-9404
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kubenet-20220601112023-9404" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:32:50.908531    8168 out.go:296] Setting OutFile to fd 1928 ...
	I0601 11:32:50.973550    8168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:32:50.973550    8168 out.go:309] Setting ErrFile to fd 1552...
	I0601 11:32:50.973550    8168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:32:50.989495    8168 out.go:303] Setting JSON to false
	I0601 11:32:50.991613    8168 start.go:115] hostinfo: {"hostname":"minikube2","uptime":15106,"bootTime":1654068064,"procs":159,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 11:32:50.991613    8168 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:32:50.995206    8168 out.go:177] * [kubenet-20220601112023-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 11:32:50.999409    8168 notify.go:193] Checking for updates...
	I0601 11:32:51.000768    8168 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 11:32:51.003922    8168 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 11:32:51.006479    8168 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:32:51.008852    8168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:32:51.012297    8168 config.go:178] Loaded profile config "bridge-20220601112023-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:32:51.013134    8168 config.go:178] Loaded profile config "enable-default-cni-20220601112023-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:32:51.013840    8168 config.go:178] Loaded profile config "false-20220601112030-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:32:51.013840    8168 config.go:178] Loaded profile config "multinode-20220601110036-9404-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:32:51.013840    8168 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:32:53.719321    8168 docker.go:137] docker version: linux-20.10.14
	I0601 11:32:53.726636    8168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:32:55.963350    8168 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2365757s)
	I0601 11:32:55.963558    8168 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:32:54.8299682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:32:55.968375    8168 out.go:177] * Using the docker driver based on user configuration
	I0601 11:32:55.971891    8168 start.go:284] selected driver: docker
	I0601 11:32:55.971891    8168 start.go:806] validating driver "docker" against <nil>
	I0601 11:32:55.971891    8168 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:32:56.472068    8168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:32:58.706517    8168 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.234424s)
	I0601 11:32:58.706517    8168 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 11:32:57.5563459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:32:58.707062    8168 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:32:58.707893    8168 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:32:58.711314    8168 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:32:58.713339    8168 cni.go:91] network plugin configured as "kubenet", returning disabled
	I0601 11:32:58.713339    8168 start_flags.go:306] config:
	{Name:kubenet-20220601112023-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubenet-20220601112023-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:32:58.716046    8168 out.go:177] * Starting control plane node kubenet-20220601112023-9404 in cluster kubenet-20220601112023-9404
	I0601 11:32:58.718735    8168 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:32:58.721825    8168 out.go:177] * Pulling base image ...
	I0601 11:32:58.723131    8168 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:32:58.723131    8168 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:32:58.724157    8168 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:32:58.724233    8168 cache.go:57] Caching tarball of preloaded images
	I0601 11:32:58.724417    8168 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:32:58.724417    8168 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 11:32:58.725002    8168 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubenet-20220601112023-9404\config.json ...
	I0601 11:32:58.725214    8168 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubenet-20220601112023-9404\config.json: {Name:mk001b1a7088501bc01282e451124627be70b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:32:59.850341    8168 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 11:32:59.850341    8168 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:32:59.850341    8168 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:32:59.850341    8168 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 11:32:59.850341    8168 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 11:32:59.850341    8168 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 11:32:59.850341    8168 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 11:32:59.850341    8168 cache.go:160] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from local cache
	I0601 11:32:59.850341    8168 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 11:33:02.193488    8168 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a from cached tarball
	I0601 11:33:02.193623    8168 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:33:02.193786    8168 start.go:352] acquiring machines lock for kubenet-20220601112023-9404: {Name:mk519e83b10124dcfd35da7dc282a2c0f414d831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:33:02.194062    8168 start.go:356] acquired machines lock for "kubenet-20220601112023-9404" in 200.6µs
	I0601 11:33:02.194216    8168 start.go:91] Provisioning new machine with config: &{Name:kubenet-20220601112023-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubenet-20220601112023-9404 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:33:02.194488    8168 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:33:02.201296    8168 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:33:02.202320    8168 start.go:165] libmachine.API.Create for "kubenet-20220601112023-9404" (driver="docker")
	I0601 11:33:02.202320    8168 client.go:168] LocalClient.Create starting
	I0601 11:33:02.202987    8168 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:33:02.202987    8168 main.go:134] libmachine: Decoding PEM data...
	I0601 11:33:02.202987    8168 main.go:134] libmachine: Parsing certificate...
	I0601 11:33:02.203562    8168 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:33:02.203562    8168 main.go:134] libmachine: Decoding PEM data...
	I0601 11:33:02.203562    8168 main.go:134] libmachine: Parsing certificate...
	I0601 11:33:02.214056    8168 cli_runner.go:164] Run: docker network inspect kubenet-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:33:03.333506    8168 cli_runner.go:211] docker network inspect kubenet-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:33:03.333568    8168 cli_runner.go:217] Completed: docker network inspect kubenet-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1192777s)
	I0601 11:33:03.340437    8168 network_create.go:272] running [docker network inspect kubenet-20220601112023-9404] to gather additional debugging logs...
	I0601 11:33:03.340437    8168 cli_runner.go:164] Run: docker network inspect kubenet-20220601112023-9404
	W0601 11:33:04.437660    8168 cli_runner.go:211] docker network inspect kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:33:04.437844    8168 cli_runner.go:217] Completed: docker network inspect kubenet-20220601112023-9404: (1.0972101s)
	I0601 11:33:04.437902    8168 network_create.go:275] error running [docker network inspect kubenet-20220601112023-9404]: docker network inspect kubenet-20220601112023-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220601112023-9404
	I0601 11:33:04.437953    8168 network_create.go:277] output of [docker network inspect kubenet-20220601112023-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220601112023-9404
	
	** /stderr **
	I0601 11:33:04.445286    8168 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:33:05.559069    8168 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1135392s)
	I0601 11:33:05.580312    8168 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001ad250] misses:0}
	I0601 11:33:05.580312    8168 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:33:05.580312    8168 network_create.go:115] attempt to create docker network kubenet-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:33:05.587821    8168 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220601112023-9404
	W0601 11:33:06.749480    8168 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:33:06.749480    8168 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220601112023-9404: (1.1616467s)
	E0601 11:33:06.749480    8168 network_create.go:104] error while trying to create docker network kubenet-20220601112023-9404 192.168.49.0/24: create docker network kubenet-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 12ce2fec430805745a2c6552f368db999b924e05325593769a755a76a6da7a45 (br-12ce2fec4308): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	W0601 11:33:06.749480    8168 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 12ce2fec430805745a2c6552f368db999b924e05325593769a755a76a6da7a45 (br-12ce2fec4308): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220601112023-9404 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 12ce2fec430805745a2c6552f368db999b924e05325593769a755a76a6da7a45 (br-12ce2fec4308): conflicts with network 0c9673f752458c71f4a61225c7397b7e9bad054bf8922ff1a0be4c8ce3074a21 (br-0c9673f75245): networks have overlapping IPv4
	
	I0601 11:33:06.770477    8168 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:33:07.872461    8168 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1019715s)
	I0601 11:33:07.879471    8168 cli_runner.go:164] Run: docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:33:09.014083    8168 cli_runner.go:211] docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:33:09.014083    8168 cli_runner.go:217] Completed: docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: (1.1345988s)
	I0601 11:33:09.014083    8168 client.go:171] LocalClient.Create took 6.8116862s
	I0601 11:33:11.030270    8168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:33:11.039012    8168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404
	W0601 11:33:12.178344    8168 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:33:12.178344    8168 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: (1.1393194s)
	I0601 11:33:12.178344    8168 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:12.466866    8168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404
	W0601 11:33:13.531998    8168 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:33:13.532184    8168 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: (1.0651204s)
	W0601 11:33:13.532399    8168 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	
	W0601 11:33:13.532455    8168 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:13.542731    8168 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:33:13.549887    8168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404
	W0601 11:33:14.668741    8168 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:33:14.668741    8168 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: (1.1186505s)
	I0601 11:33:14.669137    8168 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:14.977239    8168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404
	W0601 11:33:16.067788    8168 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:33:16.067788    8168 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: (1.0903496s)
	W0601 11:33:16.067788    8168 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	
	W0601 11:33:16.067788    8168 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:16.067788    8168 start.go:134] duration metric: createHost completed in 13.8731443s
	I0601 11:33:16.067788    8168 start.go:81] releasing machines lock for "kubenet-20220601112023-9404", held for 13.873503s
	W0601 11:33:16.067788    8168 start.go:599] error starting host: creating host: create: creating: setting up container node: creating volume for kubenet-20220601112023-9404 container: docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220601112023-9404': mkdir /var/lib/docker/volumes/kubenet-20220601112023-9404: read-only file system
	I0601 11:33:16.082747    8168 cli_runner.go:164] Run: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:17.176348    8168 cli_runner.go:211] docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:17.176579    8168 cli_runner.go:217] Completed: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: (1.0935891s)
	I0601 11:33:17.176748    8168 delete.go:82] Unable to get host status for kubenet-20220601112023-9404, assuming it has already been deleted: state: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	W0601 11:33:17.176748    8168 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubenet-20220601112023-9404 container: docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220601112023-9404': mkdir /var/lib/docker/volumes/kubenet-20220601112023-9404: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubenet-20220601112023-9404 container: docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220601112023-9404': mkdir /var/lib/docker/volumes/kubenet-20220601112023-9404: read-only file system
	
	I0601 11:33:17.176748    8168 start.go:614] Will try again in 5 seconds ...
	I0601 11:33:22.177601    8168 start.go:352] acquiring machines lock for kubenet-20220601112023-9404: {Name:mk519e83b10124dcfd35da7dc282a2c0f414d831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:33:22.177865    8168 start.go:356] acquired machines lock for "kubenet-20220601112023-9404" in 0s
	I0601 11:33:22.178254    8168 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:33:22.178254    8168 fix.go:55] fixHost starting: 
	I0601 11:33:22.191455    8168 cli_runner.go:164] Run: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:23.296487    8168 cli_runner.go:211] docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:23.296520    8168 cli_runner.go:217] Completed: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: (1.1048046s)
	I0601 11:33:23.296605    8168 fix.go:103] recreateIfNeeded on kubenet-20220601112023-9404: state= err=unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:23.296650    8168 fix.go:108] machineExists: false. err=machine does not exist
	I0601 11:33:23.299425    8168 out.go:177] * docker "kubenet-20220601112023-9404" container is missing, will recreate.
	I0601 11:33:23.304962    8168 delete.go:124] DEMOLISHING kubenet-20220601112023-9404 ...
	I0601 11:33:23.318690    8168 cli_runner.go:164] Run: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:24.409962    8168 cli_runner.go:211] docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:24.410045    8168 cli_runner.go:217] Completed: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: (1.0910245s)
	W0601 11:33:24.410079    8168 stop.go:75] unable to get state: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:24.410079    8168 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:24.426114    8168 cli_runner.go:164] Run: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:25.470517    8168 cli_runner.go:211] docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:25.470517    8168 cli_runner.go:217] Completed: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: (1.0443914s)
	I0601 11:33:25.470517    8168 delete.go:82] Unable to get host status for kubenet-20220601112023-9404, assuming it has already been deleted: state: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:25.478691    8168 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubenet-20220601112023-9404
	W0601 11:33:26.496216    8168 cli_runner.go:211] docker container inspect -f {{.Id}} kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:33:26.496377    8168 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubenet-20220601112023-9404: (1.0175142s)
	I0601 11:33:26.496377    8168 kic.go:356] could not find the container kubenet-20220601112023-9404 to remove it. will try anyways
	I0601 11:33:26.503826    8168 cli_runner.go:164] Run: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:27.511401    8168 cli_runner.go:211] docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:27.511401    8168 cli_runner.go:217] Completed: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: (1.0075636s)
	W0601 11:33:27.511401    8168 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:27.520662    8168 cli_runner.go:164] Run: docker exec --privileged -t kubenet-20220601112023-9404 /bin/bash -c "sudo init 0"
	W0601 11:33:28.561648    8168 cli_runner.go:211] docker exec --privileged -t kubenet-20220601112023-9404 /bin/bash -c "sudo init 0" returned with exit code 1
	I0601 11:33:28.561648    8168 cli_runner.go:217] Completed: docker exec --privileged -t kubenet-20220601112023-9404 /bin/bash -c "sudo init 0": (1.0409747s)
	I0601 11:33:28.561648    8168 oci.go:625] error shutdown kubenet-20220601112023-9404: docker exec --privileged -t kubenet-20220601112023-9404 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:29.582379    8168 cli_runner.go:164] Run: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:30.635259    8168 cli_runner.go:211] docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:30.635471    8168 cli_runner.go:217] Completed: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: (1.0526267s)
	I0601 11:33:30.635568    8168 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:30.635619    8168 oci.go:639] temporary error: container kubenet-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:30.635619    8168 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:31.114796    8168 cli_runner.go:164] Run: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:32.141737    8168 cli_runner.go:211] docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:32.141737    8168 cli_runner.go:217] Completed: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: (1.0269294s)
	I0601 11:33:32.141737    8168 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:32.141737    8168 oci.go:639] temporary error: container kubenet-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:32.141737    8168 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:33.047497    8168 cli_runner.go:164] Run: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:34.090562    8168 cli_runner.go:211] docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:34.090562    8168 cli_runner.go:217] Completed: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: (1.0430535s)
	I0601 11:33:34.090562    8168 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:34.090562    8168 oci.go:639] temporary error: container kubenet-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:34.090562    8168 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:34.747328    8168 cli_runner.go:164] Run: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:35.826870    8168 cli_runner.go:211] docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:35.826870    8168 cli_runner.go:217] Completed: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: (1.0795293s)
	I0601 11:33:35.826870    8168 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:35.826870    8168 oci.go:639] temporary error: container kubenet-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:35.826870    8168 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:36.947392    8168 cli_runner.go:164] Run: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:38.051703    8168 cli_runner.go:211] docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:38.051703    8168 cli_runner.go:217] Completed: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: (1.1042984s)
	I0601 11:33:38.051703    8168 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:38.051703    8168 oci.go:639] temporary error: container kubenet-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:38.052049    8168 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:39.579531    8168 cli_runner.go:164] Run: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:40.600374    8168 cli_runner.go:211] docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:40.600374    8168 cli_runner.go:217] Completed: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: (1.020832s)
	I0601 11:33:40.600374    8168 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:40.600374    8168 oci.go:639] temporary error: container kubenet-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:40.600374    8168 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:43.657255    8168 cli_runner.go:164] Run: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}
	W0601 11:33:44.718326    8168 cli_runner.go:211] docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}} returned with exit code 1
	I0601 11:33:44.718326    8168 cli_runner.go:217] Completed: docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: (1.0610587s)
	I0601 11:33:44.718326    8168 oci.go:637] temporary error verifying shutdown: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:44.718326    8168 oci.go:639] temporary error: container kubenet-20220601112023-9404 status is  but expect it to be exited
	I0601 11:33:44.718326    8168 oci.go:88] couldn't shut down kubenet-20220601112023-9404 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubenet-20220601112023-9404": docker container inspect kubenet-20220601112023-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
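The block above (oci.go:637/639 plus retry.go:31) polls `docker container inspect --format={{.State.Status}}` with a growing delay until the container reports `exited`; here the container was already deleted, so every probe exits non-zero and the loop eventually gives up with "might be okay". A minimal sketch of that verify-with-backoff shape — the function name, the injected `sleep`, and the fake prober are illustrative, not minikube's actual code:

```python
def verify_exited(inspect_status, retries, base_delay, sleep=lambda s: None):
    """Poll container status until it reports "exited", backing off between
    probes -- the same shape as the oci.go shutdown verification in the log.
    (Names and the injectable sleep are illustrative, not minikube's API.)"""
    delay = base_delay
    for _ in range(retries):
        try:
            if inspect_status() == "exited":
                return True
        except RuntimeError:
            # Mirrors "temporary error verifying shutdown": docker container
            # inspect exited non-zero because the container is already gone.
            pass
        sleep(delay)
        delay *= 2  # the log shows roughly increasing retry intervals
    return False  # "couldn't shut down ... (might be okay)"

def gone():
    # Fake prober: container already deleted, so every inspect fails,
    # like "Error: No such container" above.
    raise RuntimeError("No such container")

print(verify_exited(gone, retries=8, base_delay=0.5))  # prints False
```

Returning `False` is treated as non-fatal, which is why the log proceeds to `docker rm -f -v` immediately afterwards.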
	 
	I0601 11:33:44.728862    8168 cli_runner.go:164] Run: docker rm -f -v kubenet-20220601112023-9404
	I0601 11:33:45.780784    8168 cli_runner.go:217] Completed: docker rm -f -v kubenet-20220601112023-9404: (1.0519106s)
	I0601 11:33:45.788498    8168 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubenet-20220601112023-9404
	W0601 11:33:46.825190    8168 cli_runner.go:211] docker container inspect -f {{.Id}} kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:33:46.825329    8168 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubenet-20220601112023-9404: (1.036681s)
	I0601 11:33:46.833000    8168 cli_runner.go:164] Run: docker network inspect kubenet-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:33:47.880012    8168 cli_runner.go:211] docker network inspect kubenet-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:33:47.880012    8168 cli_runner.go:217] Completed: docker network inspect kubenet-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.047s)
	I0601 11:33:47.886006    8168 network_create.go:272] running [docker network inspect kubenet-20220601112023-9404] to gather additional debugging logs...
	I0601 11:33:47.886006    8168 cli_runner.go:164] Run: docker network inspect kubenet-20220601112023-9404
	W0601 11:33:48.933818    8168 cli_runner.go:211] docker network inspect kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:33:48.933877    8168 cli_runner.go:217] Completed: docker network inspect kubenet-20220601112023-9404: (1.0477998s)
	I0601 11:33:48.933877    8168 network_create.go:275] error running [docker network inspect kubenet-20220601112023-9404]: docker network inspect kubenet-20220601112023-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220601112023-9404
	I0601 11:33:48.933877    8168 network_create.go:277] output of [docker network inspect kubenet-20220601112023-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220601112023-9404
	
	** /stderr **
	W0601 11:33:48.935103    8168 delete.go:139] delete failed (probably ok) <nil>
	I0601 11:33:48.935103    8168 fix.go:115] Sleeping 1 second for extra luck!
	I0601 11:33:49.944739    8168 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:33:49.948865    8168 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0601 11:33:49.949098    8168 start.go:165] libmachine.API.Create for "kubenet-20220601112023-9404" (driver="docker")
	I0601 11:33:49.949098    8168 client.go:168] LocalClient.Create starting
	I0601 11:33:49.949828    8168 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0601 11:33:49.949828    8168 main.go:134] libmachine: Decoding PEM data...
	I0601 11:33:49.949828    8168 main.go:134] libmachine: Parsing certificate...
	I0601 11:33:49.949828    8168 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0601 11:33:49.949828    8168 main.go:134] libmachine: Decoding PEM data...
	I0601 11:33:49.950411    8168 main.go:134] libmachine: Parsing certificate...
	I0601 11:33:49.957755    8168 cli_runner.go:164] Run: docker network inspect kubenet-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:33:51.015964    8168 cli_runner.go:211] docker network inspect kubenet-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:33:51.015964    8168 cli_runner.go:217] Completed: docker network inspect kubenet-20220601112023-9404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0581971s)
	I0601 11:33:51.022325    8168 network_create.go:272] running [docker network inspect kubenet-20220601112023-9404] to gather additional debugging logs...
	I0601 11:33:51.022325    8168 cli_runner.go:164] Run: docker network inspect kubenet-20220601112023-9404
	W0601 11:33:52.067936    8168 cli_runner.go:211] docker network inspect kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:33:52.067936    8168 cli_runner.go:217] Completed: docker network inspect kubenet-20220601112023-9404: (1.0454492s)
	I0601 11:33:52.068004    8168 network_create.go:275] error running [docker network inspect kubenet-20220601112023-9404]: docker network inspect kubenet-20220601112023-9404: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220601112023-9404
	I0601 11:33:52.068004    8168 network_create.go:277] output of [docker network inspect kubenet-20220601112023-9404]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220601112023-9404
	
	** /stderr **
	I0601 11:33:52.075987    8168 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:33:53.172723    8168 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0967233s)
	I0601 11:33:53.192334    8168 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001ad250] amended:false}} dirty:map[] misses:0}
	I0601 11:33:53.192401    8168 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:33:53.210212    8168 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001ad250] amended:true}} dirty:map[192.168.49.0:0xc0001ad250 192.168.58.0:0xc0001ad0b0] misses:0}
	I0601 11:33:53.210212    8168 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:33:53.210212    8168 network_create.go:115] attempt to create docker network kubenet-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:33:53.216878    8168 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220601112023-9404
	W0601 11:33:54.280049    8168 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:33:54.280049    8168 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220601112023-9404: (1.0629407s)
	E0601 11:33:54.280133    8168 network_create.go:104] error while trying to create docker network kubenet-20220601112023-9404 192.168.58.0/24: create docker network kubenet-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 94a6015fe02de29655cebdfc89822c6a7e333d4dd9fad1d48d23dc221f0231b2 (br-94a6015fe02d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	W0601 11:33:54.280196    8168 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 94a6015fe02de29655cebdfc89822c6a7e333d4dd9fad1d48d23dc221f0231b2 (br-94a6015fe02d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220601112023-9404 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220601112023-9404: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 94a6015fe02de29655cebdfc89822c6a7e333d4dd9fad1d48d23dc221f0231b2 (br-94a6015fe02d): conflicts with network 50298ec259284aa8f019248a998613d3704636a9b9191ac4d06bfd538392d98f (br-50298ec25928): networks have overlapping IPv4
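The failure above is a subnet-selection race: network.go skipped 192.168.49.0/24 as reserved in minikube's in-memory map, chose 192.168.58.0/24 as "free", but the daemon already had a bridge (br-50298ec25928) on an overlapping range that the map did not know about. A sketch of the overlap check that selection needs — candidate list, function name, and inputs are illustrative, not minikube's actual implementation:

```python
import ipaddress

def pick_free_subnet(candidates, in_use):
    """Return the first candidate subnet that overlaps nothing already on the
    daemon -- a sketch of the scan network.go performs before running
    "docker network create". (Illustrative names, not minikube's code.)"""
    taken = [ipaddress.ip_network(n) for n in in_use]
    for cand in candidates:
        net = ipaddress.ip_network(cand)
        if not any(net.overlaps(t) for t in taken):
            return str(net)
    return None  # nothing free

# In the log, 192.168.49.0/24 was skipped as reserved, but 192.168.58.0/24
# still collided with an existing bridge the reservation map missed; with an
# accurate in_use list the scan would have moved on to the next /24.
print(pick_free_subnet(
    ["192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"],
    ["192.168.49.0/24", "192.168.58.0/24"],
))  # prints 192.168.67.0/24
```

Because the error is flagged "un-retryable", minikube falls back to the default bridge network, which is what the warning about a possible cluster IP change after restart refers to.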
	
	I0601 11:33:54.292971    8168 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:33:55.339880    8168 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0458778s)
	I0601 11:33:55.346826    8168 cli_runner.go:164] Run: docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true
	W0601 11:33:56.399856    8168 cli_runner.go:211] docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0601 11:33:56.399856    8168 cli_runner.go:217] Completed: docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: (1.0529041s)
	I0601 11:33:56.399954    8168 client.go:171] LocalClient.Create took 6.4507837s
	I0601 11:33:58.414797    8168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:33:58.421348    8168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404
	W0601 11:33:59.521410    8168 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:33:59.521583    8168 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: (1.1000503s)
	I0601 11:33:59.521769    8168 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:33:59.872396    8168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404
	W0601 11:34:00.939901    8168 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:34:00.939901    8168 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: (1.0674934s)
	W0601 11:34:00.939901    8168 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	
	W0601 11:34:00.939901    8168 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:34:00.948914    8168 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:34:00.954903    8168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404
	W0601 11:34:02.031118    8168 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:34:02.031118    8168 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: (1.0762032s)
	I0601 11:34:02.031118    8168 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:34:02.260328    8168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404
	W0601 11:34:03.344517    8168 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:34:03.344517    8168 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: (1.0840777s)
	W0601 11:34:03.344803    8168 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	
	W0601 11:34:03.344803    8168 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:34:03.344803    8168 start.go:134] duration metric: createHost completed in 13.3996893s
	I0601 11:34:03.354729    8168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:34:03.363177    8168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404
	W0601 11:34:04.456895    8168 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:34:04.456895    8168 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: (1.0934342s)
	I0601 11:34:04.456895    8168 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:34:04.712727    8168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404
	W0601 11:34:05.748815    8168 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:34:05.748815    8168 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: (1.0357897s)
	W0601 11:34:05.748815    8168 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	
	W0601 11:34:05.748815    8168 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:34:05.759068    8168 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:34:05.765513    8168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404
	W0601 11:34:06.798818    8168 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:34:06.798818    8168 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: (1.0332943s)
	I0601 11:34:06.798818    8168 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:34:07.010516    8168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404
	W0601 11:34:08.045566    8168 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404 returned with exit code 1
	I0601 11:34:08.045566    8168 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: (1.0341438s)
	W0601 11:34:08.045566    8168 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	
	W0601 11:34:08.045566    8168 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220601112023-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220601112023-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220601112023-9404
	I0601 11:34:08.045566    8168 fix.go:57] fixHost completed within 45.8667981s
	I0601 11:34:08.045566    8168 start.go:81] releasing machines lock for "kubenet-20220601112023-9404", held for 45.8671864s
	W0601 11:34:08.046325    8168 out.go:239] * Failed to start docker container. Running "minikube delete -p kubenet-20220601112023-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220601112023-9404 container: docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220601112023-9404': mkdir /var/lib/docker/volumes/kubenet-20220601112023-9404: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p kubenet-20220601112023-9404" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220601112023-9404 container: docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220601112023-9404': mkdir /var/lib/docker/volumes/kubenet-20220601112023-9404: read-only file system
	
	I0601 11:34:08.051060    8168 out.go:177] 
	W0601 11:34:08.052971    8168 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220601112023-9404 container: docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220601112023-9404': mkdir /var/lib/docker/volumes/kubenet-20220601112023-9404: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220601112023-9404 container: docker volume create kubenet-20220601112023-9404 --label name.minikube.sigs.k8s.io=kubenet-20220601112023-9404 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220601112023-9404: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220601112023-9404': mkdir /var/lib/docker/volumes/kubenet-20220601112023-9404: read-only file system
	
	W0601 11:34:08.052971    8168 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0601 11:34:08.053508    8168 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0601 11:34:08.056253    8168 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/kubenet/Start (77.52s)
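The port lookup that fails above is a Go template evaluated over `docker container inspect` output. As a hedged illustration of what that template extracts, the same lookup can be sketched in Python; the inspect document below is hand-written sample data, not captured from this run:

```python
import json

# Sketch of the lookup behind minikube's Go template:
#   {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
# The inspect document is an illustrative stand-in, not output from this run.
inspect_doc = json.loads(
    '[{"NetworkSettings": {"Ports": {"22/tcp": '
    '[{"HostIp": "127.0.0.1", "HostPort": "49155"}]}}}]'
)

def ssh_host_port(doc):
    """Return the host port bound to container port 22/tcp, as the template does."""
    bindings = doc[0]["NetworkSettings"]["Ports"].get("22/tcp") or []
    if not bindings:
        # Corresponds to the failure mode above: the container (and with it
        # the port binding) no longer exists.
        raise LookupError("no host port bound for 22/tcp")
    return bindings[0]["HostPort"]

print(ssh_host_port(inspect_doc))  # -> 49155
```

When the container was never created (as here, because `docker volume create` failed on a read-only `/var/lib/docker`), the inspect call itself exits non-zero before any such extraction can run.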


Test pass (50/220)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 17.58
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.79
10 TestDownloadOnly/v1.23.6/json-events 14.54
11 TestDownloadOnly/v1.23.6/preload-exists 0
14 TestDownloadOnly/v1.23.6/kubectl 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.57
16 TestDownloadOnly/DeleteAll 11.12
17 TestDownloadOnly/DeleteAlwaysSucceeds 6.95
18 TestDownloadOnlyKic 45.12
19 TestBinaryMirror 16.62
33 TestErrorSpam/start 20.98
34 TestErrorSpam/status 8.34
35 TestErrorSpam/pause 9.01
36 TestErrorSpam/unpause 9.27
37 TestErrorSpam/stop 66.32
40 TestFunctional/serial/CopySyncFile 0.03
48 TestFunctional/serial/CacheCmd/cache/add_remote 10.61
50 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.33
51 TestFunctional/serial/CacheCmd/cache/list 0.35
54 TestFunctional/serial/CacheCmd/cache/delete 0.74
62 TestFunctional/parallel/ConfigCmd 2.27
64 TestFunctional/parallel/DryRun 13.11
65 TestFunctional/parallel/InternationalLanguage 5.37
71 TestFunctional/parallel/AddonsCmd 3.46
86 TestFunctional/parallel/Version/short 0.42
93 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
100 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
109 TestFunctional/parallel/ProfileCmd/profile_not_create 7.28
112 TestFunctional/parallel/ProfileCmd/profile_list 4.52
113 TestFunctional/parallel/ImageCommands/ImageRemove 6.11
114 TestFunctional/parallel/ProfileCmd/profile_json_output 4.51
117 TestFunctional/delete_addon-resizer_images 2.05
118 TestFunctional/delete_my-image_image 1.05
119 TestFunctional/delete_minikube_cached_images 1.05
125 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 2.81
138 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
152 TestErrorJSONOutput 7.29
155 TestKicCustomNetwork/use_default_bridge_network 229.08
158 TestMainNoArgs 0.32
192 TestNoKubernetes/serial/StartNoK8sWithVersion 0.48
193 TestStoppedBinaryUpgrade/Setup 0.55
259 TestStartStop/group/newest-cni/serial/DeployApp 0
260 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.04
272 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
273 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/json-events (17.58s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220601102309-9404 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220601102309-9404 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (17.580965s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.58s)
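This `--download-only` run fetches the Kubernetes preload tarball and verifies it against an md5 digest carried in the download URL (`?checksum=md5:<hex>`, as recorded in the start logs). A minimal sketch of that style of check, using a synthetic stand-in file rather than the real tarball:

```python
import hashlib
import os
import tempfile

# Hedged sketch of an md5 verification like the one applied to the preload
# tarball download; the file contents here are synthetic.
def md5_of(path, chunk_size=1 << 20):
    """Stream the file through md5 so large tarballs are not read into memory at once."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"preload-tarball-stand-in")

expected = hashlib.md5(b"preload-tarball-stand-in").hexdigest()
assert md5_of(path) == expected  # a mismatch would reject the download
os.unlink(path)
print("checksum ok")
```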

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.79s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220601102309-9404
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220601102309-9404: exit status 85 (786.5101ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 10:23:11
	Running on machine: minikube2
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 10:23:11.694810    6800 out.go:296] Setting OutFile to fd 604 ...
	I0601 10:23:11.750800    6800 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:23:11.750800    6800 out.go:309] Setting ErrFile to fd 624...
	I0601 10:23:11.750800    6800 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0601 10:23:11.779982    6800 root.go:300] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0601 10:23:11.785709    6800 out.go:303] Setting JSON to true
	I0601 10:23:11.788847    6800 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10927,"bootTime":1654068064,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 10:23:11.788847    6800 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 10:23:11.809438    6800 out.go:97] [download-only-20220601102309-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 10:23:11.809674    6800 notify.go:193] Checking for updates...
	W0601 10:23:11.809674    6800 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0601 10:23:11.812459    6800 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 10:23:11.814853    6800 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 10:23:11.818885    6800 out.go:169] MINIKUBE_LOCATION=14079
	I0601 10:23:11.822496    6800 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0601 10:23:11.827592    6800 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0601 10:23:11.828520    6800 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:23:14.463499    6800 docker.go:137] docker version: linux-20.10.14
	I0601 10:23:14.470982    6800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:23:16.540195    6800 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0691907s)
	I0601 10:23:16.541436    6800 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 10:23:15.479393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:23:16.555667    6800 out.go:97] Using the docker driver based on user configuration
	I0601 10:23:16.556014    6800 start.go:284] selected driver: docker
	I0601 10:23:16.556014    6800 start.go:806] validating driver "docker" against <nil>
	I0601 10:23:16.575192    6800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:23:18.592894    6800 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0176803s)
	I0601 10:23:18.593039    6800 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 10:23:17.5857439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:23:18.593039    6800 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 10:23:18.645218    6800 start_flags.go:373] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I0601 10:23:18.646205    6800 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 10:23:18.650962    6800 out.go:169] Using Docker Desktop driver with the root privilege
	I0601 10:23:18.653048    6800 cni.go:95] Creating CNI manager for ""
	I0601 10:23:18.653048    6800 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 10:23:18.653048    6800 start_flags.go:306] config:
	{Name:download-only-20220601102309-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220601102309-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:23:18.655752    6800 out.go:97] Starting control plane node download-only-20220601102309-9404 in cluster download-only-20220601102309-9404
	I0601 10:23:18.655752    6800 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 10:23:18.657790    6800 out.go:97] Pulling base image ...
	I0601 10:23:18.658858    6800 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 10:23:18.659271    6800 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 10:23:18.705452    6800 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 10:23:18.705452    6800 cache.go:57] Caching tarball of preloaded images
	I0601 10:23:18.706060    6800 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 10:23:18.708579    6800 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0601 10:23:18.708632    6800 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0601 10:23:18.789489    6800 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 10:23:19.835930    6800 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 10:23:19.835930    6800 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:23:19.835930    6800 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:23:19.835930    6800 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 10:23:19.837890    6800 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 10:23:22.031333    6800 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0601 10:23:22.033921    6800 preload.go:256] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0601 10:23:23.106811    6800 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 10:23:23.107949    6800 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-20220601102309-9404\config.json ...
	I0601 10:23:23.108412    6800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-20220601102309-9404\config.json: {Name:mk4233ac63a541fed080cc15180980cf2a9b13f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:23:23.113137    6800 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 10:23:23.116157    6800 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220601102309-9404"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.79s)
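The "Last Start" section above documents its own glog-style line format (`[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`). For readers post-processing these reports, a small sketch of parsing that format; the regex is an assumption derived from the documented layout, and the sample line is taken from the output above:

```python
import re

# Parse glog-style lines of the form documented in the log header:
#   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
GLOG = re.compile(
    r"^(?P<level>[IWEF])"            # I=info, W=warning, E=error, F=fatal
    r"(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +"
    r"(?P<threadid>\d+) "
    r"(?P<file>[^:]+):(?P<line>\d+)\] "
    r"(?P<msg>.*)$"
)

sample = "I0601 10:23:18.708579    6800 out.go:97] Downloading Kubernetes v1.16.0 preload ..."
m = GLOG.match(sample)
print(m.group("level"), m.group("file"), m.group("line"), m.group("msg"))
```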

TestDownloadOnly/v1.23.6/json-events (14.54s)

=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220601102309-9404 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220601102309-9404 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker: (14.5400784s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (14.54s)

TestDownloadOnly/v1.23.6/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.00s)

TestDownloadOnly/v1.23.6/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.6/kubectl
--- PASS: TestDownloadOnly/v1.23.6/kubectl (0.00s)

TestDownloadOnly/v1.23.6/LogsDuration (0.57s)

=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220601102309-9404
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220601102309-9404: exit status 85 (569.0861ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 10:23:28
	Running on machine: minikube2
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 10:23:28.566109    1212 out.go:296] Setting OutFile to fd 660 ...
	I0601 10:23:28.623286    1212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:23:28.623384    1212 out.go:309] Setting ErrFile to fd 596...
	I0601 10:23:28.623428    1212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0601 10:23:28.645027    1212 root.go:300] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0601 10:23:28.646391    1212 out.go:303] Setting JSON to true
	I0601 10:23:28.648942    1212 start.go:115] hostinfo: {"hostname":"minikube2","uptime":10944,"bootTime":1654068064,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 10:23:28.649049    1212 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 10:23:28.653658    1212 out.go:97] [download-only-20220601102309-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 10:23:28.653765    1212 notify.go:193] Checking for updates...
	I0601 10:23:28.655767    1212 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 10:23:28.665718    1212 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 10:23:28.668686    1212 out.go:169] MINIKUBE_LOCATION=14079
	I0601 10:23:28.671619    1212 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0601 10:23:28.675101    1212 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0601 10:23:28.676293    1212 config.go:178] Loaded profile config "download-only-20220601102309-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0601 10:23:28.676293    1212 start.go:714] api.Load failed for download-only-20220601102309-9404: filestore "download-only-20220601102309-9404": Docker machine "download-only-20220601102309-9404" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0601 10:23:28.676293    1212 driver.go:358] Setting default libvirt URI to qemu:///system
	W0601 10:23:28.676293    1212 start.go:714] api.Load failed for download-only-20220601102309-9404: filestore "download-only-20220601102309-9404": Docker machine "download-only-20220601102309-9404" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0601 10:23:31.162650    1212 docker.go:137] docker version: linux-20.10.14
	I0601 10:23:31.169711    1212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:23:33.147307    1212 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9773284s)
	I0601 10:23:33.148151    1212 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 10:23:32.1300514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:23:33.235138    1212 out.go:97] Using the docker driver based on existing profile
	I0601 10:23:33.235138    1212 start.go:284] selected driver: docker
	I0601 10:23:33.237129    1212 start.go:806] validating driver "docker" against &{Name:download-only-20220601102309-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220601102309-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:23:33.255079    1212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:23:35.249264    1212 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9941631s)
	I0601 10:23:35.249264    1212 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-01 10:23:34.2240836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:23:35.294877    1212 cni.go:95] Creating CNI manager for ""
	I0601 10:23:35.294877    1212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 10:23:35.294877    1212 start_flags.go:306] config:
	{Name:download-only-20220601102309-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:download-only-20220601102309-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:23:35.369620    1212 out.go:97] Starting control plane node download-only-20220601102309-9404 in cluster download-only-20220601102309-9404
	I0601 10:23:35.369620    1212 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 10:23:35.384996    1212 out.go:97] Pulling base image ...
	I0601 10:23:35.385073    1212 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 10:23:35.385172    1212 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 10:23:35.439299    1212 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 10:23:35.439299    1212 cache.go:57] Caching tarball of preloaded images
	I0601 10:23:35.439299    1212 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 10:23:35.442902    1212 out.go:97] Downloading Kubernetes v1.23.6 preload ...
	I0601 10:23:35.442902    1212 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0601 10:23:35.511251    1212 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4?checksum=md5:a6c3f222f3cce2a88e27e126d64eb717 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 10:23:36.601202    1212 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 10:23:36.601202    1212 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:23:36.601202    1212 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653677545-13807@sha256_312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a.tar
	I0601 10:23:36.601202    1212 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 10:23:36.601202    1212 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 10:23:36.601202    1212 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 10:23:36.601796    1212 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220601102309-9404"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.57s)

TestDownloadOnly/DeleteAll (11.12s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (11.1247637s)
--- PASS: TestDownloadOnly/DeleteAll (11.12s)

TestDownloadOnly/DeleteAlwaysSucceeds (6.95s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20220601102309-9404
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20220601102309-9404: (6.9449701s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (6.95s)

TestDownloadOnlyKic (45.12s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20220601102408-9404 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20220601102408-9404 --force --alsologtostderr --driver=docker: (35.9601772s)
helpers_test.go:175: Cleaning up "download-docker-20220601102408-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20220601102408-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20220601102408-9404: (8.0431396s)
--- PASS: TestDownloadOnlyKic (45.12s)

TestBinaryMirror (16.62s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220601102453-9404 --alsologtostderr --binary-mirror http://127.0.0.1:49938 --driver=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220601102453-9404 --alsologtostderr --binary-mirror http://127.0.0.1:49938 --driver=docker: (8.1216118s)
helpers_test.go:175: Cleaning up "binary-mirror-20220601102453-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-20220601102453-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-20220601102453-9404: (8.2716227s)
--- PASS: TestBinaryMirror (16.62s)

TestErrorSpam/start (20.98s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 start --dry-run: (6.9476292s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 start --dry-run: (6.9243536s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 start --dry-run
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 start --dry-run: (7.101777s)
--- PASS: TestErrorSpam/start (20.98s)

TestErrorSpam/status (8.34s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 status
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 status: exit status 7 (2.8104253s)

-- stdout --
	nospam-20220601102633-9404
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0601 10:28:10.568863    7344 status.go:258] status error: host: state: unknown state "nospam-20220601102633-9404": docker container inspect nospam-20220601102633-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	E0601 10:28:10.568863    7344 status.go:261] The "nospam-20220601102633-9404" host does not exist!

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 status" failed: exit status 7
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 status
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 status: exit status 7 (2.7963717s)

-- stdout --
	nospam-20220601102633-9404
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0601 10:28:13.381968    8900 status.go:258] status error: host: state: unknown state "nospam-20220601102633-9404": docker container inspect nospam-20220601102633-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	E0601 10:28:13.382033    8900 status.go:261] The "nospam-20220601102633-9404" host does not exist!

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 status" failed: exit status 7
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 status
error_spam_test.go:179: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 status: exit status 7 (2.7335559s)

-- stdout --
	nospam-20220601102633-9404
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0601 10:28:16.116742    8728 status.go:258] status error: host: state: unknown state "nospam-20220601102633-9404": docker container inspect nospam-20220601102633-9404 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	E0601 10:28:16.116742    8728 status.go:261] The "nospam-20220601102633-9404" host does not exist!

** /stderr **
error_spam_test.go:181: "out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 status" failed: exit status 7
--- PASS: TestErrorSpam/status (8.34s)

TestErrorSpam/pause (9.01s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 pause: exit status 80 (3.0116328s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220601102633-9404": docker container inspect nospam-20220601102633-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_233.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 pause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 pause: exit status 80 (3.013613s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220601102633-9404": docker container inspect nospam-20220601102633-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_233.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 pause" failed: exit status 80
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 pause
error_spam_test.go:179: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 pause: exit status 80 (2.9834227s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220601102633-9404": docker container inspect nospam-20220601102633-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_233.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:181: "out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (9.01s)

TestErrorSpam/unpause (9.27s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 unpause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 unpause: exit status 80 (3.0381259s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220601102633-9404": docker container inspect nospam-20220601102633-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_233.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 unpause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 unpause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 unpause: exit status 80 (3.0754012s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220601102633-9404": docker container inspect nospam-20220601102633-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_233.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 unpause" failed: exit status 80
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 unpause
error_spam_test.go:179: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 unpause: exit status 80 (3.1496553s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220601102633-9404": docker container inspect nospam-20220601102633-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_233.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:181: "out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (9.27s)

TestErrorSpam/stop (66.32s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 stop
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 stop: exit status 82 (22.0878465s)

-- stdout --
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	
	

-- /stdout --
** stderr ** 
	E0601 10:28:39.598502    2480 daemonize_windows.go:38] error terminating scheduled stop for profile nospam-20220601102633-9404: stopping schedule-stop service for profile nospam-20220601102633-9404: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20220601102633-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20220601102633-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20220601102633-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_233.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 stop" failed: exit status 82
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 stop
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 stop: exit status 82 (22.2395451s)

-- stdout --
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	
	

-- /stdout --
** stderr ** 
	E0601 10:29:01.773800    9980 daemonize_windows.go:38] error terminating scheduled stop for profile nospam-20220601102633-9404: stopping schedule-stop service for profile nospam-20220601102633-9404: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20220601102633-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20220601102633-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20220601102633-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_233.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 stop" failed: exit status 82
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 stop
error_spam_test.go:179: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220601102633-9404 stop: exit status 82 (21.9902976s)

-- stdout --
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	* Stopping node "nospam-20220601102633-9404"  ...
	
	

-- /stdout --
** stderr ** 
	E0601 10:29:23.959074    3960 daemonize_windows.go:38] error terminating scheduled stop for profile nospam-20220601102633-9404: stopping schedule-stop service for profile nospam-20220601102633-9404: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20220601102633-9404": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20220601102633-9404: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20220601102633-9404 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220601102633-9404
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_233.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:181: "out/minikube-windows-amd64.exe -p nospam-20220601102633-9404 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220601102633-9404 stop" failed: exit status 82
--- PASS: TestErrorSpam/stop (66.32s)
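Editor's note (not part of the captured output): the unpause and stop failures above share one root cause — `docker container inspect` run against a container that no longer exists, which exits with status 1 and "Error: No such container". A minimal sketch of a guarded inspect, reusing the profile name from this log, would be:

```shell
# Probe for the container first, so a missing container produces a clear
# message instead of inspect's raw "No such container" / exit status 1.
name="nospam-20220601102633-9404"   # profile/container name from the log above
if docker container inspect "$name" >/dev/null 2>&1; then
  docker container inspect -f '{{.State.Status}}' "$name"
else
  echo "container $name not found"
fi
```

On this CI host the container had already been deleted, so the guard would take the `else` branch rather than surfacing the inspect error that minikube reports as GUEST_STATUS / GUEST_STOP_TIMEOUT.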

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\9404\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cache add k8s.gcr.io/pause:3.1: (3.5415933s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cache add k8s.gcr.io/pause:3.3: (3.530151s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 cache add k8s.gcr.io/pause:latest: (3.5403799s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.61s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.33s)

TestFunctional/serial/CacheCmd/cache/list (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.35s)

TestFunctional/serial/CacheCmd/cache/delete (0.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.74s)

TestFunctional/parallel/ConfigCmd (2.27s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 config get cpus: exit status 14 (358.6734ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 config get cpus: exit status 14 (326.7903ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.27s)

TestFunctional/parallel/DryRun (13.11s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.4057391s)

-- stdout --
	* [functional-20220601102952-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0601 10:36:38.044142    5564 out.go:296] Setting OutFile to fd 724 ...
	I0601 10:36:38.122733    5564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:36:38.122832    5564 out.go:309] Setting ErrFile to fd 264...
	I0601 10:36:38.122867    5564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:36:38.137891    5564 out.go:303] Setting JSON to false
	I0601 10:36:38.142249    5564 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11733,"bootTime":1654068065,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 10:36:38.142477    5564 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 10:36:38.146449    5564 out.go:177] * [functional-20220601102952-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 10:36:38.150524    5564 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 10:36:38.153421    5564 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 10:36:38.156036    5564 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:36:38.157609    5564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:36:38.162600    5564 config.go:178] Loaded profile config "functional-20220601102952-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 10:36:38.164184    5564 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:36:40.918947    5564 docker.go:137] docker version: linux-20.10.14
	I0601 10:36:40.928227    5564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:36:43.088950    5564 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1605565s)
	I0601 10:36:43.089594    5564 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-01 10:36:41.9988533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:36:43.095385    5564 out.go:177] * Using the docker driver based on existing profile
	I0601 10:36:43.097217    5564 start.go:284] selected driver: docker
	I0601 10:36:43.097217    5564 start.go:806] validating driver "docker" against &{Name:functional-20220601102952-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102952-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:36:43.097796    5564 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:36:43.149264    5564 out.go:177] 
	W0601 10:36:43.151759    5564 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0601 10:36:43.154735    5564 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --dry-run --alsologtostderr -v=1 --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:983: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --dry-run --alsologtostderr -v=1 --driver=docker: (7.7056803s)
--- PASS: TestFunctional/parallel/DryRun (13.11s)
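The RSRC_INSUFFICIENT_REQ_MEMORY exit above (exit status 23) comes from `--memory 250MB` being parsed as 250MiB and compared against the 1800MB usable floor quoted in the error message. A quick sketch of that unit arithmetic (the floor constant is taken from the error text; the helper name is illustrative, not minikube's actual code):

```python
# Sketch of the memory check implied by the RSRC_INSUFFICIENT_REQ_MEMORY error.
# MIN_USABLE_MB comes from the error text; mib_to_mb is an illustrative helper.
MIN_USABLE_MB = 1800

def mib_to_mb(mib: float) -> float:
    # 1 MiB = 1024 * 1024 bytes; 1 MB = 1000 * 1000 bytes
    return mib * (1024 * 1024) / (1000 * 1000)

requested_mib = 250
requested_mb = mib_to_mb(requested_mib)      # 262.144 MB
insufficient = requested_mb < MIN_USABLE_MB  # True -> start aborts with exit code 23
```

Even generously converted to decimal megabytes, the 250MiB request is an order of magnitude below the floor, so the dry run fails before any Docker work is attempted.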

TestFunctional/parallel/InternationalLanguage (5.37s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220601102952-9404 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.3672482s)

-- stdout --
	* [functional-20220601102952-9404] minikube v1.26.0-beta.1 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0601 10:36:32.664076    6248 out.go:296] Setting OutFile to fd 884 ...
	I0601 10:36:32.738293    6248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:36:32.738376    6248 out.go:309] Setting ErrFile to fd 676...
	I0601 10:36:32.738376    6248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:36:32.751759    6248 out.go:303] Setting JSON to false
	I0601 10:36:32.754260    6248 start.go:115] hostinfo: {"hostname":"minikube2","uptime":11728,"bootTime":1654068064,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0601 10:36:32.754260    6248 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 10:36:32.759035    6248 out.go:177] * [functional-20220601102952-9404] minikube v1.26.0-beta.1 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0601 10:36:32.766152    6248 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0601 10:36:32.768411    6248 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0601 10:36:32.770945    6248 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:36:32.774328    6248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:36:32.777207    6248 config.go:178] Loaded profile config "functional-20220601102952-9404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 10:36:32.778383    6248 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:36:35.525083    6248 docker.go:137] docker version: linux-20.10.14
	I0601 10:36:35.534224    6248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:36:37.667603    6248 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1333108s)
	I0601 10:36:37.667993    6248 info.go:265] docker info: {ID:2BZM:7NDE:77NB:RGYB:ZWUY:UDIE:SXHS:57OG:L5H5:ZN37:54KV:SCXQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-01 10:36:36.5973573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:36:37.671587    6248 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0601 10:36:37.674492    6248 start.go:284] selected driver: docker
	I0601 10:36:37.674492    6248 start.go:806] validating driver "docker" against &{Name:functional-20220601102952-9404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102952-9404 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:36:37.674492    6248 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:36:37.741595    6248 out.go:177] 
	W0601 10:36:37.744105    6248 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0601 10:36:37.746159    6248 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (5.37s)

TestFunctional/parallel/AddonsCmd (3.46s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 addons list: (3.1014773s)
functional_test.go:1631: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (3.46s)

TestFunctional/parallel/Version/short (0.42s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 version --short
--- PASS: TestFunctional/parallel/Version/short (0.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20220601102952-9404 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20220601102952-9404 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 7416: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (7.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (3.0744969s)
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (4.2048793s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (7.28s)

TestFunctional/parallel/ProfileCmd/profile_list (4.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-windows-amd64.exe profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Done: out/minikube-windows-amd64.exe profile list: (4.1196294s)
functional_test.go:1310: Took "4.1199264s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1324: Took "403.3026ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (4.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (6.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image rm gcr.io/google-containers/addon-resizer:functional-20220601102952-9404

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image rm gcr.io/google-containers/addon-resizer:functional-20220601102952-9404: (3.137703s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220601102952-9404 image ls: (2.9733625s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (6.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (4.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (4.1486637s)
functional_test.go:1361: Took "4.1488196s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1374: Took "361.6111ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (4.51s)

TestFunctional/delete_addon-resizer_images (2.05s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Done: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: (1.0147326s)
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220601102952-9404
functional_test.go:185: (dbg) Done: docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220601102952-9404: (1.0185866s)
--- PASS: TestFunctional/delete_addon-resizer_images (2.05s)

TestFunctional/delete_my-image_image (1.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220601102952-9404
functional_test.go:193: (dbg) Done: docker rmi -f localhost/my-image:functional-20220601102952-9404: (1.0382862s)
--- PASS: TestFunctional/delete_my-image_image (1.05s)

TestFunctional/delete_minikube_cached_images (1.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220601102952-9404
functional_test.go:201: (dbg) Done: docker rmi -f minikube-local-cache-test:functional-20220601102952-9404: (1.0397893s)
--- PASS: TestFunctional/delete_minikube_cached_images (1.05s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (2.81s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220601104200-9404 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220601104200-9404 addons enable ingress-dns --alsologtostderr -v=5: (2.8145343s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (2.81s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (7.29s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20220601104530-9404 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20220601104530-9404 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (361.5412ms)

-- stdout --
	{"specversion":"1.0","id":"8f35ace3-bff5-4afd-8221-2f1a8a0b27d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220601104530-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"78c6e856-19e0-4dda-8a73-7c824e462520","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"db510bbb-c256-423a-913f-dcecdc0db27b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"2f68bc8d-d07e-429d-8b7e-ff7fdd6d0068","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"5bfd0ad0-f5ea-4d38-a340-7c1847911a89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"63267c69-1df8-43d7-b9b2-d4b032378acc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220601104530-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20220601104530-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20220601104530-9404: (6.9281018s)
--- PASS: TestErrorJSONOutput (7.29s)
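Each stdout line of `minikube start --output=json` in the TestErrorJSONOutput block above is a CloudEvents-style JSON object, so the error event can be picked out mechanically by its `type` field. A minimal sketch, using two events abridged from the output above (the `id` fields are shortened here for readability):

```python
import json

# Two events abridged from the log above: one info event, one error event.
log_lines = [
    '{"specversion":"1.0","id":"info-1","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json",'
    '"data":{"message":"MINIKUBE_LOCATION=14079"}}',
    '{"specversion":"1.0","id":"err-1","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json",'
    '"data":{"advice":"","exitcode":"56","issues":"","message":'
    '"The driver \'fail\' is not supported on windows/amd64",'
    '"name":"DRV_UNSUPPORTED_OS","url":""}}',
]

# Parse every line, then keep only the error events.
events = [json.loads(line) for line in log_lines]
errors = [e["data"] for e in events if e["type"] == "io.k8s.sigs.minikube.error"]
for err in errors:
    print(f"{err['name']}: exit {err['exitcode']} - {err['message']}")
```

This is the same filtering the test harness can do to assert on `DRV_UNSUPPORTED_OS` and its exit code 56 without scraping free-form text.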

TestKicCustomNetwork/use_default_bridge_network (229.08s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220601104938-9404 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220601104938-9404 --network=bridge: (3m8.2695507s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0633158s)
helpers_test.go:175: Cleaning up "docker-network-20220601104938-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220601104938-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220601104938-9404: (39.7412081s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (229.08s)

TestMainNoArgs (0.32s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.32s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.48s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220601111410-9404 --no-kubernetes --kubernetes-version=1.20 --driver=docker

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220601111410-9404 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (479.3606ms)

-- stdout --
	* [NoKubernetes-20220601111410-9404] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.48s)
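The MK_USAGE exit above (status 14) is minikube's flag validation rejecting `--kubernetes-version` together with `--no-kubernetes`; the suggested remedy, `minikube config unset kubernetes-version`, applies when the version comes from global config rather than the command line. A hypothetical sketch of such a mutual-exclusion check (names and structure are illustrative only, not minikube's actual implementation):

```python
from typing import Optional

def validate_start_flags(no_kubernetes: bool,
                         kubernetes_version: Optional[str]) -> Optional[str]:
    """Return an error message for an invalid flag combination, else None (sketch)."""
    if no_kubernetes and kubernetes_version:
        return ("cannot specify --kubernetes-version with --no-kubernetes, "
                "to unset a global config run: $ minikube config unset kubernetes-version")
    return None

# The combination from the log fails; dropping the version succeeds.
conflict = validate_start_flags(True, "1.20")
ok = validate_start_flags(True, None)
```

The test passes precisely because the command exits non-zero with this usage message before any cluster work starts.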

TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.04s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220601112753-9404 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220601112753-9404 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.0358383s)
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.04s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (21/220)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.23.6/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

TestDownloadOnly/v1.23.6/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220601102952-9404 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2198486036\001
functional_test.go:1069: (dbg) Non-zero exit: docker build -t minikube-local-cache-test:functional-20220601102952-9404 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2198486036\001: exit status 1 (1.0529987s)

** stderr ** 
	#2 [internal] load .dockerignore
	#2 sha256:f075ab3cb7be15abe5239cb29323520332ff067571bbeab6bb29be5091df2056
	#2 ERROR: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system
	
	#1 [internal] load build definition from Dockerfile
	#1 sha256:f8d158b547a675ec01a6320506f9e20aadee1a3cf1b9833b5768d094bdf0b547
	#1 ERROR: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system
	------
	 > [internal] load .dockerignore:
	------
	------
	 > [internal] load build definition from Dockerfile:
	------
	failed to solve with frontend dockerfile.v0: failed to read dockerfile: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system

** /stderr **
functional_test.go:1071: failed to build docker image, skipping local test: exit status 1
--- SKIP: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/parallel/DashboardCmd (300s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220601102952-9404 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:908: output didn't produce a URL
functional_test.go:902: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220601102952-9404 --alsologtostderr -v=1] ...
helpers_test.go:488: unable to find parent, assuming dead: process does not exist
--- SKIP: TestFunctional/parallel/DashboardCmd (300.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (7.43s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220601112742-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220601112742-9404

=== CONT  TestStartStop/group/disable-driver-mounts
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220601112742-9404: (7.4339686s)
--- SKIP: TestStartStop/group/disable-driver-mounts (7.43s)

TestNetworkPlugins/group/flannel (7.56s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220601112023-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20220601112023-9404
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20220601112023-9404: (7.5633625s)
--- SKIP: TestNetworkPlugins/group/flannel (7.56s)

TestNetworkPlugins/group/custom-flannel (7.46s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220601112030-9404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-flannel-20220601112030-9404

=== CONT  TestNetworkPlugins/group/custom-flannel
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-flannel-20220601112030-9404: (7.4577081s)
--- SKIP: TestNetworkPlugins/group/custom-flannel (7.46s)
